[dpdk-dev,v3,3/5] bbdev: test applications

Message ID 1512682857-79467-3-git-send-email-amr.mokhtar@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Headers

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation success Compilation OK

Commit Message

Mokhtar, Amr Dec. 7, 2017, 9:40 p.m. UTC
- Full test suite for bbdev
- Test App works seamlessly on all PMDs registered with bbdev
 framework
- A python script is provided to make our life easier
- Supports execution of tests by parsing Test Vector files
- Test Vectors can be added/deleted/modified with no need for
 re-compilation
- Various Tests can be executed:
 (a) Throughput test
 (b) Offload latency test
 (c) Operation latency test
 (d) Validation test
 (e) Sanity checks

Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
---
 app/Makefile                                       |    4 +
 app/test-bbdev/Makefile                            |   53 +
 app/test-bbdev/main.c                              |  353 ++++
 app/test-bbdev/main.h                              |  148 ++
 app/test-bbdev/test-bbdev.py                       |  139 ++
 app/test-bbdev/test_bbdev.c                        | 1406 +++++++++++++
 app/test-bbdev/test_bbdev_perf.c                   | 2193 ++++++++++++++++++++
 app/test-bbdev/test_bbdev_vector.c                 |  963 +++++++++
 app/test-bbdev/test_bbdev_vector.h                 |   99 +
 app/test-bbdev/test_vectors/bbdev_vector_null.data |   32 +
 .../test_vectors/bbdev_vector_td_default.data      |   81 +
 .../test_vectors/bbdev_vector_te_default.data      |   60 +
 12 files changed, 5531 insertions(+)
 create mode 100644 app/test-bbdev/Makefile
 create mode 100644 app/test-bbdev/main.c
 create mode 100644 app/test-bbdev/main.h
 create mode 100755 app/test-bbdev/test-bbdev.py
 create mode 100644 app/test-bbdev/test_bbdev.c
 create mode 100644 app/test-bbdev/test_bbdev_perf.c
 create mode 100644 app/test-bbdev/test_bbdev_vector.c
 create mode 100644 app/test-bbdev/test_bbdev_vector.h
 create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_null.data
 create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_td_default.data
 create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_te_default.data
  

Comments

Ferruh Yigit Dec. 11, 2017, 7:01 p.m. UTC | #1
On 12/7/2017 1:40 PM, Amr Mokhtar wrote:
> - Full test suite for bbdev
> - Test App works seamlessly on all PMDs registered with bbdev
>  framework
> - A python script is provided to make our life easier

Can you please describe what the script is for?

> - Supports execution of tests by parsing Test Vector files

Can you please describe what are test vector files?

> - Test Vectors can be added/deleted/modified with no need for
>  re-compilation
> - Various Tests can be executed:
>  (a) Throughput test
>  (b) Offload latency test
>  (c) Operation latency test
>  (d) Validation test
>  (c) Sanity checks
> 
> Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>

<...>

> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +ifeq ($(CONFIG_RTE_TEST_BBDEV),y)

You don't need this ifdef I think, although I can see testpmd has it ...

> +
> +#
> +# library name
> +#
> +APP = testbbdev
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +#
> +# all sources are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += main.c
> +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev.c
> +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_perf.c
> +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_vector.c

If you remove above wrapping ifdef, you may use CONFIG_RTE_TEST_BBDEV instead.

<...>

> +int
> +unit_test_suite_runner(struct unit_test_suite *suite)

Is test-bbdev application a suit of test cases? What is the benefit of having
separate application comparing adding unit tests to test/ folder?

<...>
  
Mokhtar, Amr Dec. 23, 2017, 12:09 a.m. UTC | #2
> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Monday 11 December 2017 19:01
> To: Mokhtar, Amr <amr.mokhtar@intel.com>; dev@dpdk.org
> Cc: thomas@monjalon.net; Burakov, Anatoly <anatoly.burakov@intel.com>; De
> Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Power, Niall
> <niall.power@intel.com>; Macnamara, Chris <chris.macnamara@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 3/5] bbdev: test applications
>
> On 12/7/2017 1:40 PM, Amr Mokhtar wrote:
> > - Full test suite for bbdev
> > - Test App works seamlessly on all PMDs registered with bbdev
> >  framework
> > - A python script is provided to make our life easier
>
> Can you please describe what the script is for?


In order to run a test, many parameters are required: the EAL arguments, the
test vectors to use, the type of test (latency, offload, throughput), the
binary application location, the number of operations, the burst size, etc.
The script eases all that and assumes default values for any parameters not
provided by the tester.
In its simplest form, it can be run as a smoke test by executing:
$ ./test-bbdev.py

This will run a default test with default parameters on the bbdev_null device.
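As an illustration (not part of the patch), the command line the wrapper ends up building can be sketched roughly as below. The option names and defaults mirror the argparse definitions in test-bbdev.py, but the build_cmdline helper itself is a hypothetical name used only for this sketch:

```python
# Hypothetical sketch of how test-bbdev.py composes the test app command
# line from its defaults; option names mirror the argparse setup in the
# patch, but this helper is illustrative only.
def build_cmdline(testapp_path="./testbbdev",
                  eal_params="--vdev=bbdev_null0",
                  num_ops=32, num_lcores=16,
                  burst_size=32,
                  vector="test_vectors/bbdev_vector_null.data",
                  test_cases=None):
    params = [testapp_path] + eal_params.split()
    params.append("--")                      # separates EAL args from app args
    params.extend(["-n", str(num_ops)])
    params.extend(["-l", str(num_lcores)])
    if test_cases:                           # run all tests if none given
        params.extend(["-c", ",".join(test_cases)])
    params.extend(["-v", vector])
    params.extend(["-b", str(burst_size)])
    return params

print(" ".join(build_cmdline(test_cases=["throughput", "latency"])))
```

With no arguments this reproduces the smoke-test default (bbdev_null device, all test cases); the real script additionally loops over every vector file and burst size given.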

>
> > - Supports execution of tests by parsing Test Vector files
>
> Can you please describe what are test vector files?


Test vectors are sets of input and output parameters that, when combined,
form one test scenario. These values are logically correlated to provide
a realistic test scenario. Furthermore, they include the data in the format
that matches the module under test.

In our case, we are concerned with testing various Turbo coding scenarios, so
when testing a decode operation, for example, the input data in the test should
match the Turbo encoded data as specified in 3GPP TS 36.212, namely the
"virtual circular buffer."

The test-bbdev application parses the parameters from the test vector file into
its local memory to form the op (operation) structure, loads the data into the
input mbufs, and saves the expected output. It enqueues the op structures,
dequeues them, then compares the computed output with the expected output.
If they match, the test passes.

> <...>
>
> > +include $(RTE_SDK)/mk/rte.vars.mk
> > +
> > +ifeq ($(CONFIG_RTE_TEST_BBDEV),y)
>
> You don't need this ifdef I think, although I can see testpmd has it ...


Corrected.

>
> > +
> > +#
> > +# library name
> > +#
> > +APP = testbbdev
> > +
> > +CFLAGS += -O3
> > +CFLAGS += $(WERROR_FLAGS)
> > +
> > +#
> > +# all sources are stored in SRCS-y
> > +#
> > +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += main.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_perf.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_vector.c
>
> If you remove above wrapping ifdef, you may use CONFIG_RTE_TEST_BBDEV
> instead.


Corrected.
 
> <...>
>
> > +int
> > +unit_test_suite_runner(struct unit_test_suite *suite)
>
> Is test-bbdev application a suit of test cases? What is the benefit of having
> separate application comparing adding unit tests to test/ folder?


Yes, it is a suite of test cases.
The benefit is to have all test applications self-contained and separate,
so every device abstraction library has its own test-xxx folder.

> <...>
  

Patch

diff --git a/app/Makefile b/app/Makefile
index 7ea02b0..6b0ed67 100644
--- a/app/Makefile
+++ b/app/Makefile
@@ -43,4 +43,8 @@  ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 DIRS-$(CONFIG_RTE_APP_EVENTDEV) += test-eventdev
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_BBDEV),y)
+DIRS-$(CONFIG_RTE_TEST_BBDEV) += test-bbdev
+endif
+
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/app/test-bbdev/Makefile b/app/test-bbdev/Makefile
new file mode 100644
index 0000000..29c9be5
--- /dev/null
+++ b/app/test-bbdev/Makefile
@@ -0,0 +1,53 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(CONFIG_RTE_TEST_BBDEV),y)
+
+#
+# library name
+#
+APP = testbbdev
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += main.c
+SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_BBDEV) += test_bbdev_vector.c
+
+include $(RTE_SDK)/mk/rte.app.mk
+
+endif
diff --git a/app/test-bbdev/main.c b/app/test-bbdev/main.c
new file mode 100644
index 0000000..d3e07a6
--- /dev/null
+++ b/app/test-bbdev/main.c
@@ -0,0 +1,353 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_common.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_lcore.h>
+
+#include "main.h"
+
+/* Defines how many testcases can be specified as cmdline args */
+#define MAX_CMDLINE_TESTCASES 8
+
+static const char tc_sep = ',';
+
+static struct test_params {
+	struct test_command *test_to_run[MAX_CMDLINE_TESTCASES];
+	unsigned int num_tests;
+	unsigned int num_ops;
+	unsigned int burst_sz;
+	unsigned int num_lcores;
+	char test_vector_filename[PATH_MAX];
+} test_params;
+
+static struct test_commands_list commands_list =
+	TAILQ_HEAD_INITIALIZER(commands_list);
+
+void
+add_test_command(struct test_command *t)
+{
+	TAILQ_INSERT_TAIL(&commands_list, t, next);
+}
+
+int
+unit_test_suite_runner(struct unit_test_suite *suite)
+{
+	int test_result = TEST_SUCCESS;
+	unsigned int total = 0, skipped = 0, succeeded = 0, failed = 0;
+	uint64_t start, end;
+
+	printf(
+			"\n + ------------------------------------------------------- +\n");
+	printf(" + Starting Test Suite : %s\n", suite->suite_name);
+
+	start = rte_rdtsc_precise();
+
+	if (suite->setup) {
+		test_result = suite->setup();
+		if (test_result == TEST_FAILED) {
+			printf(" + Test suite setup %s failed!\n",
+					suite->suite_name);
+			printf(
+					" + ------------------------------------------------------- +\n");
+			return 1;
+		}
+		if (test_result == TEST_SKIPPED) {
+			printf(" + Test suite setup %s skipped!\n",
+					suite->suite_name);
+			printf(
+					" + ------------------------------------------------------- +\n");
+			return 0;
+		}
+	}
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (suite->unit_test_cases[total].setup)
+			test_result = suite->unit_test_cases[total].setup();
+
+		if (test_result == TEST_SUCCESS)
+			test_result = suite->unit_test_cases[total].testcase();
+
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_result == TEST_SUCCESS) {
+			succeeded++;
+			printf(" + TestCase [%2d] : %s passed\n", total,
+					suite->unit_test_cases[total].name);
+		} else if (test_result == TEST_SKIPPED) {
+			skipped++;
+			printf(" + TestCase [%2d] : %s skipped\n", total,
+					suite->unit_test_cases[total].name);
+		} else {
+			failed++;
+			printf(" + TestCase [%2d] : %s failed\n", total,
+					suite->unit_test_cases[total].name);
+		}
+
+		total++;
+	}
+
+	/* Run test suite teardown */
+	if (suite->teardown)
+		suite->teardown();
+
+	end = rte_rdtsc_precise();
+
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary : %s\n", suite->suite_name);
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + Tests Lasted :       %lg ms\n",
+			((end - start) * 1000) / (double)rte_get_tsc_hz());
+	printf(" + ------------------------------------------------------- +\n");
+
+	return (failed > 0) ? 1 : 0;
+}
+
+const char *
+get_vector_filename(void)
+{
+	return test_params.test_vector_filename;
+}
+
+unsigned int
+get_num_ops(void)
+{
+	return test_params.num_ops;
+}
+
+unsigned int
+get_burst_sz(void)
+{
+	return test_params.burst_sz;
+}
+
+unsigned int
+get_num_lcores(void)
+{
+	return test_params.num_lcores;
+}
+
+static void
+print_usage(const char *prog_name)
+{
+	struct test_command *t;
+
+	printf("Usage: %s [EAL params] [-- [-n/--num-ops NUM_OPS]\n"
+			"\t[-b/--burst-size BURST_SIZE]\n"
+			"\t[-v/--test-vector VECTOR_FILE]\n"
+			"\t[-c/--test-cases TEST_CASE[,TEST_CASE,...]]]\n",
+			prog_name);
+
+	printf("Available testcases: ");
+	TAILQ_FOREACH(t, &commands_list, next)
+		printf("%s ", t->command);
+	printf("\n");
+}
+
+static int
+parse_args(int argc, char **argv, struct test_params *tp)
+{
+	int opt, option_index;
+	unsigned int num_tests = 0;
+	bool test_cases_present = false;
+	bool test_vector_present = false;
+	struct test_command *t;
+	char *tokens[MAX_CMDLINE_TESTCASES];
+	int tc, ret;
+
+	static struct option lgopts[] = {
+		{ "num-ops", 1, 0, 'n' },
+		{ "burst-size", 1, 0, 'b' },
+		{ "test-cases", 1, 0, 'c' },
+		{ "test-vector", 1, 0, 'v' },
+		{ "lcores", 1, 0, 'l' },
+		{ "help", 0, 0, 'h' },
+		{ NULL,  0, 0, 0 }
+	};
+
+	while ((opt = getopt_long(argc, argv, "hn:b:c:v:l:", lgopts,
+			&option_index)) != EOF)
+		switch (opt) {
+		case 'n':
+			TEST_ASSERT(strlen(optarg) > 0,
+					"Num of operations is not provided");
+			tp->num_ops = strtol(optarg, NULL, 10);
+			break;
+		case 'b':
+			TEST_ASSERT(strlen(optarg) > 0,
+					"Burst size is not provided");
+			tp->burst_sz = strtol(optarg, NULL, 10);
+			TEST_ASSERT(tp->burst_sz <= MAX_BURST,
+					"Burst size mustn't be greater than %u",
+					MAX_BURST);
+			break;
+		case 'c':
+			TEST_ASSERT(test_cases_present == false,
+					"Test cases provided more than once");
+			test_cases_present = true;
+
+			ret = rte_strsplit(optarg, strlen(optarg),
+					tokens, MAX_CMDLINE_TESTCASES, tc_sep);
+
+			TEST_ASSERT(ret <= MAX_CMDLINE_TESTCASES,
+					"Too many test cases (max=%d)",
+					MAX_CMDLINE_TESTCASES);
+
+			for (tc = 0; tc < ret; ++tc) {
+				/* Find matching test case */
+				TAILQ_FOREACH(t, &commands_list, next)
+					if (!strcmp(tokens[tc], t->command))
+						tp->test_to_run[num_tests] = t;
+
+				TEST_ASSERT(tp->test_to_run[num_tests] != NULL,
+						"Unknown test case: %s",
+						tokens[tc]);
+				++num_tests;
+			}
+			break;
+		case 'v':
+			TEST_ASSERT(test_vector_present == false,
+					"Test vector provided more than once");
+			test_vector_present = true;
+
+			TEST_ASSERT(strlen(optarg) > 0,
+					"Config file name is null");
+
+			strncpy(tp->test_vector_filename, optarg,
+					sizeof(tp->test_vector_filename));
+			break;
+		case 'l':
+			TEST_ASSERT(strlen(optarg) > 0,
+					"Num of lcores is not provided");
+			tp->num_lcores = strtol(optarg, NULL, 10);
+			TEST_ASSERT(tp->num_lcores <= RTE_MAX_LCORE,
+					"Num of lcores mustn't be greater than %u",
+					RTE_MAX_LCORE);
+			break;
+		case 'h':
+			print_usage(argv[0]);
+			return 0;
+		default:
+			printf("ERROR: Unknown option: -%c\n", opt);
+			return -1;
+		}
+
+	if (tp->num_ops == 0) {
+		printf(
+			"WARNING: Num of operations was not provided or was set 0. Set to default (%u)\n",
+			DEFAULT_OPS);
+		tp->num_ops = DEFAULT_OPS;
+	}
+	if (tp->burst_sz == 0) {
+		printf(
+			"WARNING: Burst size was not provided or was set 0. Set to default (%u)\n",
+			DEFAULT_BURST);
+		tp->burst_sz = DEFAULT_BURST;
+	}
+	if (tp->num_lcores == 0) {
+		printf(
+			"WARNING: Num of lcores was not provided or was set 0. Set to value from RTE config (%u)\n",
+			rte_lcore_count());
+		tp->num_lcores = rte_lcore_count();
+	}
+
+	TEST_ASSERT(tp->burst_sz <= tp->num_ops,
+			"Burst size (%u) mustn't be greater than num ops (%u)",
+			tp->burst_sz, tp->num_ops);
+
+	tp->num_tests = num_tests;
+	return 0;
+}
+
+static int
+run_all_tests(void)
+{
+	int ret = TEST_SUCCESS;
+	struct test_command *t;
+
+	TAILQ_FOREACH(t, &commands_list, next)
+		ret |= t->callback();
+
+	return ret;
+}
+
+static int
+run_parsed_tests(struct test_params *tp)
+{
+	int ret = TEST_SUCCESS;
+	unsigned int i;
+
+	for (i = 0; i < tp->num_tests; ++i)
+		ret |= tp->test_to_run[i]->callback();
+
+	return ret;
+}
+
+int
+main(int argc, char **argv)
+{
+	int ret;
+
+	/* Init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		return 1;
+	argc -= ret;
+	argv += ret;
+
+	/* Parse application arguments (after the EAL ones) */
+	ret = parse_args(argc, argv, &test_params);
+	if (ret < 0) {
+		print_usage(argv[0]);
+		return 1;
+	}
+
+	rte_log_set_global_level(RTE_LOG_INFO);
+
+	/* If no argument provided - run all tests */
+	if (test_params.num_tests == 0)
+		return run_all_tests();
+	else
+		return run_parsed_tests(&test_params);
+}
diff --git a/app/test-bbdev/main.h b/app/test-bbdev/main.h
new file mode 100644
index 0000000..6c60759
--- /dev/null
+++ b/app/test-bbdev/main.h
@@ -0,0 +1,148 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _MAIN_H_
+#define _MAIN_H_
+
+#include <stddef.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_log.h>
+
+#define TEST_SUCCESS    0
+#define TEST_FAILED     -1
+#define TEST_SKIPPED    1
+
+#define MAX_BURST 512U
+#define DEFAULT_BURST 32U
+#define DEFAULT_OPS 64U
+
+#define TEST_ASSERT(cond, msg, ...) do {  \
+		if (!(cond)) {  \
+			printf("TestCase %s() line %d failed: " \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+			return TEST_FAILED;  \
+		} \
+} while (0)
+
+/* Compare two buffers (length in bytes) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do { \
+	if (memcmp((a), (b), len)) { \
+		printf("TestCase %s() line %d failed: " \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+		rte_memdump(stdout, "Buffer A", (a), len); \
+		rte_memdump(stdout, "Buffer B", (b), len); \
+		return TEST_FAILED; \
+	} \
+} while (0)
+
+#define TEST_ASSERT_SUCCESS(val, msg, ...) do { \
+		typeof(val) _val = (val); \
+		if (!(_val == 0)) { \
+			printf("TestCase %s() line %d failed (err %d): " \
+				msg "\n", __func__, __LINE__, _val, \
+				##__VA_ARGS__); \
+			return TEST_FAILED; \
+		} \
+} while (0)
+
+#define TEST_ASSERT_FAIL(val, msg, ...) \
+	TEST_ASSERT_SUCCESS(!(val), msg, ##__VA_ARGS__)
+
+#define TEST_ASSERT_NOT_NULL(val, msg, ...) do { \
+		if ((val) == NULL) { \
+			printf("TestCase %s() line %d failed (null): " \
+				msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+			return TEST_FAILED;  \
+		} \
+} while (0)
+
+struct unit_test_case {
+	int (*setup)(void);
+	void (*teardown)(void);
+	int (*testcase)(void);
+	const char *name;
+};
+
+#define TEST_CASE(testcase) {NULL, NULL, testcase, #testcase}
+
+#define TEST_CASE_ST(setup, teardown, testcase) \
+		{setup, teardown, testcase, #testcase}
+
+#define TEST_CASES_END() {NULL, NULL, NULL, NULL}
+
+struct unit_test_suite {
+	const char *suite_name;
+	int (*setup)(void);
+	void (*teardown)(void);
+	struct unit_test_case unit_test_cases[];
+};
+
+int unit_test_suite_runner(struct unit_test_suite *suite);
+
+typedef int (test_callback)(void);
+TAILQ_HEAD(test_commands_list, test_command);
+struct test_command {
+	TAILQ_ENTRY(test_command) next;
+	const char *command;
+	test_callback *callback;
+};
+
+void add_test_command(struct test_command *t);
+
+/* Register a test function */
+#define REGISTER_TEST_COMMAND(name, testsuite) \
+	static int test_func_##name(void) \
+	{ \
+		return unit_test_suite_runner(&testsuite); \
+	} \
+	static struct test_command test_struct_##name = { \
+		.command = RTE_STR(name), \
+		.callback = test_func_##name, \
+	}; \
+	static void __attribute__((constructor, used)) \
+	test_register_##name(void) \
+	{ \
+		add_test_command(&test_struct_##name); \
+	}
+
+const char *get_vector_filename(void);
+
+unsigned int get_num_ops(void);
+
+unsigned int get_burst_sz(void);
+
+unsigned int get_num_lcores(void);
+
+#endif
diff --git a/app/test-bbdev/test-bbdev.py b/app/test-bbdev/test-bbdev.py
new file mode 100755
index 0000000..cf7d619
--- /dev/null
+++ b/app/test-bbdev/test-bbdev.py
@@ -0,0 +1,139 @@ 
+#!/usr/bin/env python
+
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+import sys
+import os
+import argparse
+import subprocess
+import shlex
+
+from threading import Timer
+
+def kill(process):
+    print "ERROR: Test app timed out"
+    process.kill()
+
+if "RTE_SDK" in os.environ:
+    dpdk_path = os.environ["RTE_SDK"]
+else:
+    dpdk_path = "../.."
+
+if "RTE_TARGET" in os.environ:
+    dpdk_target = os.environ["RTE_TARGET"]
+else:
+    dpdk_target = "x86_64-native-linuxapp-gcc"
+
+parser = argparse.ArgumentParser(
+                    description='BBdev Unit Test Application',
+                    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+parser.add_argument("-p", "--testapp-path",
+                    help="specifies path to the bbdev test app",
+                    default=dpdk_path + "/" + dpdk_target + "/app/testbbdev")
+parser.add_argument("-e", "--eal-params",
+                    help="EAL arguments which are passed to the test app",
+                    default="--vdev=bbdev_null0")
+parser.add_argument("-t", "--timeout",
+                    type=int,
+                    help="Timeout in seconds",
+                    default=300)
+parser.add_argument("-c", "--test-cases",
+                    nargs="+",
+                    help="Defines test cases to run. Run all if not specified")
+parser.add_argument("-v", "--test-vector",
+                    nargs="+",
+                    help="Specifies paths to the test vector files.",
+                    default=[dpdk_path +
+                    "/app/test-bbdev/test_vectors/bbdev_vector_null.data"])
+parser.add_argument("-n", "--num-ops",
+                    type=int,
+                    help="Number of operations to process on device.",
+                    default=32)
+parser.add_argument("-b", "--burst-size",
+                    nargs="+",
+                    type=int,
+                    help="Operations enqueue/dequeue burst size.",
+                    default=[32])
+parser.add_argument("-l", "--num-lcores",
+                    type=int,
+                    help="Number of lcores to run.",
+                    default=16)
+
+args = parser.parse_args()
+
+if not os.path.exists(args.testapp_path):
+    print "No such file: " + args.testapp_path
+    sys.exit(1)
+
+params = [args.testapp_path]
+if args.eal_params:
+    params.extend(shlex.split(args.eal_params))
+
+params.extend(["--"])
+
+if args.num_ops:
+    params.extend(["-n", str(args.num_ops)])
+
+if args.num_lcores:
+    params.extend(["-l", str(args.num_lcores)])
+
+if args.test_cases:
+    params.extend(["-c"])
+    params.extend([",".join(args.test_cases)])
+
+exit_status = 0
+for vector in args.test_vector:
+    for burst_size in args.burst_size:
+        call_params = params[:]
+        call_params.extend(["-v", vector])
+        call_params.extend(["-b", str(burst_size)])
+        params_string = " ".join(call_params)
+
+        print("Executing: {}".format(params_string))
+        app_proc = subprocess.Popen(call_params)
+        if args.timeout > 0:
+            timer = Timer(args.timeout, kill, [app_proc])
+            timer.start()
+
+        try:
+            app_proc.communicate()
+        except:
+            print("Error: failed to execute: {}".format(params_string))
+        finally:
+            timer.cancel()
+
+        if app_proc.returncode != 0:
+            exit_status = 1
+            print("ERROR TestCase failed. Failed test for vector {}. Return code: {}".format(
+                vector, app_proc.returncode))
+
+sys.exit(exit_status)
diff --git a/app/test-bbdev/test_bbdev.c b/app/test-bbdev/test_bbdev.c
new file mode 100644
index 0000000..b4cd67e
--- /dev/null
+++ b/app/test-bbdev/test_bbdev.c
@@ -0,0 +1,1406 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_cycles.h>
+
+#include <rte_bus_vdev.h>
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_op.h>
+#include <rte_bbdev_pmd.h>
+
+#include "main.h"
+
+
+#define BBDEV_NAME_NULL          ("bbdev_null")
+
+struct bbdev_testsuite_params {
+	struct rte_bbdev_queue_conf qconf;
+};
+
+static struct bbdev_testsuite_params testsuite_params;
+
+static uint8_t null_dev_id;
+
+static int
+testsuite_setup(void)
+{
+	uint8_t nb_devs;
+	int ret;
+	char buf[RTE_BBDEV_NAME_MAX_LEN];
+
+	/* Create test device */
+	snprintf(buf, sizeof(buf), "%s_unittest", BBDEV_NAME_NULL);
+	ret = rte_vdev_init(buf, NULL);
+	TEST_ASSERT(ret == 0, "Failed to create instance of pmd: %s", buf);
+
+	nb_devs = rte_bbdev_count();
+	TEST_ASSERT(nb_devs != 0, "No devices found");
+
+	/* Most recently created device is our device */
+	null_dev_id = nb_devs - 1;
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	char buf[RTE_BBDEV_NAME_MAX_LEN];
+
+	snprintf(buf, sizeof(buf), "%s_unittest", BBDEV_NAME_NULL);
+	rte_vdev_uninit(buf);
+}
+
+static int
+ut_setup(void)
+{
+	struct bbdev_testsuite_params *ts_params = &testsuite_params;
+	uint8_t num_queues;
+
+	/* Valid queue configuration */
+	ts_params->qconf.priority = 0;
+	ts_params->qconf.socket = SOCKET_ID_ANY;
+	ts_params->qconf.deferred_start = 1;
+
+	num_queues = 1;
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(null_dev_id, num_queues,
+			SOCKET_ID_ANY), "Failed to setup queues for bbdev %u",
+			null_dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_bbdev_start(null_dev_id),
+			"Failed to start bbdev %u", null_dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	rte_bbdev_close(null_dev_id);
+}
+
+static int
+test_bbdev_configure_invalid_dev_id(void)
+{
+	uint8_t dev_id;
+	uint8_t num_queues;
+
+	num_queues = 1;
+	for (dev_id = 0; dev_id < RTE_BBDEV_MAX_DEVS; dev_id++) {
+		if (!rte_bbdev_is_valid(dev_id)) {
+			TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id,
+					num_queues, SOCKET_ID_ANY),
+					"Failed test for rte_bbdev_setup_queues: "
+					"invalid dev_id %u", dev_id);
+			TEST_ASSERT(rte_bbdev_intr_enable(dev_id) == -ENODEV,
+					"Failed test for rte_bbdev_intr_enable: "
+					"invalid dev_id %u", dev_id);
+			break;
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_invalid_num_queues(void)
+{
+	struct rte_bbdev_info info;
+	uint8_t dev_id, num_devs;
+	uint8_t num_queues;
+	int return_value;
+
+	TEST_ASSERT((num_devs = rte_bbdev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid num_queues values */
+	num_queues = 8;
+
+	/* valid dev_id values */
+	dev_id = null_dev_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_bbdev_stop(dev_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 0, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"invalid num_queues %d", 0);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, num_queues,
+			SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"valid num_queues %u", num_queues);
+
+	TEST_ASSERT_FAIL(return_value = rte_bbdev_info_get(dev_id, NULL),
+			 "Failed test for rte_bbdev_info_get: "
+			 "returned value:%i", return_value);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value:%i", return_value);
+
+	TEST_ASSERT(info.num_queues == num_queues,
+			"Failed test for rte_bbdev_info_get: "
+			"invalid num_queues:%u", info.num_queues);
+
+	num_queues = info.drv.max_num_queues;
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, num_queues,
+			SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"invalid num_queues: %u", num_queues);
+
+	num_queues++;
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, num_queues,
+			SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"invalid num_queues: %u", num_queues);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_stop_device(void)
+{
+	struct rte_bbdev_info info;
+	uint8_t dev_id;
+	int return_value;
+
+	/* valid dev_id values */
+	dev_id = null_dev_id;
+
+	/* Stop the device so it can be configured */
+	rte_bbdev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_info_get function: %i", return_value);
+
+	TEST_ASSERT_SUCCESS(info.started, "Failed test for rte_bbdev_info_get: "
+			"started value: %u", info.started);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id,
+			info.drv.max_num_queues, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"device should be stopped, dev_id: %u", dev_id);
+
+	return_value = rte_bbdev_intr_enable(dev_id);
+	TEST_ASSERT(return_value != -EBUSY,
+			"Failed test for rte_bbdev_intr_enable: device should be stopped, dev_id: %u",
+			dev_id);
+
+	/* Start the device so it cannot be configured */
+	TEST_ASSERT_FAIL(rte_bbdev_start(RTE_BBDEV_MAX_DEVS),
+			"Failed test for rte_bbdev_start: invalid dev_id");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+			"Failed to start bbdev %u", dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_info_get function: %i", return_value);
+
+	TEST_ASSERT_FAIL(info.started, "Failed test for rte_bbdev_info_get: "
+			"started value: %u", info.started);
+
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id,
+			info.drv.max_num_queues, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"device should be started, dev_id: %u", dev_id);
+
+	return_value = rte_bbdev_intr_enable(dev_id);
+	TEST_ASSERT(return_value == -EBUSY,
+			"Failed test for rte_bbdev_intr_enable: device should be started, dev_id: %u",
+			dev_id);
+
+	/* Stop again the device so it can be once again configured */
+	TEST_ASSERT_FAIL(rte_bbdev_stop(RTE_BBDEV_MAX_DEVS),
+			"Failed test for rte_bbdev_stop: invalid dev_id");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id), "Failed to stop bbdev %u",
+			dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_info_get function: %i", return_value);
+
+	TEST_ASSERT_SUCCESS(info.started, "Failed test for rte_bbdev_info_get: "
+			"started value: %u", info.started);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id,
+			info.drv.max_num_queues, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"device should be stopped, dev_id: %u", dev_id);
+
+	return_value = rte_bbdev_intr_enable(dev_id);
+	TEST_ASSERT(return_value != -EBUSY,
+			"Failed test for rte_bbdev_intr_enable: device should be stopped, dev_id: %u",
+			dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_stop_queue(void)
+{
+	struct bbdev_testsuite_params *ts_params = &testsuite_params;
+	struct rte_bbdev_info info;
+	struct rte_bbdev_queue_info qinfo;
+	uint8_t dev_id;
+	uint16_t queue_id;
+	int return_value;
+
+	/* Valid dev_id values */
+	dev_id = null_dev_id;
+
+	/* Valid queue_id values */
+	queue_id = 0;
+
+	rte_bbdev_stop(dev_id);
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value:%i", return_value);
+
+	/* Valid queue configuration */
+	ts_params->qconf.queue_size = info.drv.queue_size_lim;
+	ts_params->qconf.priority = info.drv.max_queue_priority;
+
+	/* Device - started; queue - started */
+	rte_bbdev_start(dev_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"queue:%u on device:%u should be stopped",
+			 queue_id, dev_id);
+
+	/* Device - stopped; queue - started */
+	rte_bbdev_stop(dev_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"queue:%u on device:%u should be stopped",
+			 queue_id, dev_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_stop(RTE_BBDEV_MAX_DEVS, queue_id),
+			"Failed test for rte_bbdev_queue_stop "
+			"invalid dev_id ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_stop(dev_id, RTE_MAX_QUEUES_PER_PORT),
+			"Failed test for rte_bbdev_queue_stop "
+			"invalid queue_id ");
+
+	/* Device - stopped; queue - stopped */
+	rte_bbdev_queue_stop(dev_id, queue_id);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"queue:%u on device:%u should be stopped", queue_id,
+			dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+			queue_id, &qinfo),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_queue_info_get function: %i", return_value);
+
+	TEST_ASSERT(qinfo.conf.socket == ts_params->qconf.socket,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid socket:%d", qinfo.conf.socket);
+
+	TEST_ASSERT(qinfo.conf.queue_size == ts_params->qconf.queue_size,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid queue_size:%u", qinfo.conf.queue_size);
+
+	TEST_ASSERT(qinfo.conf.priority == ts_params->qconf.priority,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid priority:%u", qinfo.conf.priority);
+
+	TEST_ASSERT(qinfo.conf.deferred_start ==
+			ts_params->qconf.deferred_start,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid deferred_start:%u", qinfo.conf.deferred_start);
+
+	/* Device - started; queue - stopped */
+	rte_bbdev_start(dev_id);
+	rte_bbdev_queue_stop(dev_id, queue_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"queue:%u on device:%u should be stopped", queue_id,
+			dev_id);
+
+	rte_bbdev_stop(dev_id);
+
+	/* After rte_bbdev_start(dev_id):
+	 * - queue should still be stopped if deferred_start == 1
+	 */
+	rte_bbdev_start(dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+			queue_id, &qinfo),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_queue_info_get function: %i", return_value);
+
+	TEST_ASSERT(qinfo.started == 0,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid value for qinfo.started:%u", qinfo.started);
+
+	rte_bbdev_stop(dev_id);
+
+	/* After rte_bbdev_start(dev_id):
+	 * - queue should be started if deferred_start == 0
+	 */
+	ts_params->qconf.deferred_start = 0;
+	rte_bbdev_queue_configure(dev_id, queue_id, &ts_params->qconf);
+	rte_bbdev_start(dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+			queue_id, &qinfo),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value from "
+			"rte_bbdev_queue_info_get function: %i", return_value);
+
+	TEST_ASSERT(qinfo.started == 1,
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid value for qinfo.started:%u", qinfo.started);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_invalid_queue_configure(void)
+{
+	struct bbdev_testsuite_params *ts_params = &testsuite_params;
+	int return_value;
+	struct rte_bbdev_info info;
+	uint8_t dev_id;
+	uint16_t queue_id;
+
+	/* Valid dev_id values */
+	dev_id = null_dev_id;
+
+	/* Valid queue_id values */
+	queue_id = 0;
+
+	rte_bbdev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+			"Failed test for rte_bbdev_info_get: "
+			"invalid return value:%i", return_value);
+
+	rte_bbdev_queue_stop(dev_id, queue_id);
+
+	ts_params->qconf.queue_size = info.drv.queue_size_lim + 1;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid value qconf.queue_size: %u",
+			ts_params->qconf.queue_size);
+
+	ts_params->qconf.queue_size = info.drv.queue_size_lim;
+	ts_params->qconf.priority = info.drv.max_queue_priority + 1;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid value qconf.priority: %u",
+			ts_params->qconf.priority);
+
+	ts_params->qconf.priority = info.drv.max_queue_priority;
+	queue_id = info.num_queues;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid value queue_id: %u", queue_id);
+
+	queue_id = 0;
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id, NULL),
+			"Failed test for rte_bbdev_queue_configure: "
+			"NULL qconf structure ");
+
+	ts_params->qconf.socket = RTE_MAX_NUMA_NODES;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid socket number ");
+
+	ts_params->qconf.socket = SOCKET_ID_ANY;
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid value qconf.queue_size: %u",
+			ts_params->qconf.queue_size);
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(RTE_BBDEV_MAX_DEVS, queue_id,
+			&ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid dev_id");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id, NULL),
+			"Failed test for rte_bbdev_queue_configure: "
+			"NULL qconf structure ");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_op_pool(void)
+{
+	struct rte_mempool *mp;
+
+	unsigned int dec_size = sizeof(struct rte_bbdev_dec_op);
+	unsigned int enc_size = sizeof(struct rte_bbdev_enc_op);
+
+	const char *pool_dec = "Test_DEC";
+	const char *pool_enc = "Test_ENC";
+
+	/* Valid pool configuration */
+	uint32_t size = 256;
+	uint32_t cache_size = 128;
+
+	TEST_ASSERT(rte_bbdev_op_pool_create(NULL,
+			RTE_BBDEV_OP_TURBO_DEC, size, cache_size, 0) == NULL,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"NULL name parameter");
+
+	TEST_ASSERT((mp = rte_bbdev_op_pool_create(pool_dec,
+			RTE_BBDEV_OP_TURBO_DEC, size, cache_size, 0)) != NULL,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"returned value is empty");
+
+	TEST_ASSERT(mp->size == size,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid size of the mempool, mp->size: %u", mp->size);
+
+	TEST_ASSERT(mp->cache_size == cache_size,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid cache size of the mempool, mp->cache_size: %u",
+			mp->cache_size);
+
+	TEST_ASSERT_SUCCESS(strcmp(mp->name, pool_dec),
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid name of mempool, mp->name: %s", mp->name);
+
+	TEST_ASSERT(mp->elt_size == dec_size,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid element size for RTE_BBDEV_OP_TURBO_DEC, "
+			"mp->elt_size: %u", mp->elt_size);
+
+	rte_mempool_free(mp);
+
+	TEST_ASSERT((mp = rte_bbdev_op_pool_create(pool_enc,
+			RTE_BBDEV_OP_TURBO_ENC, size, cache_size, 0)) != NULL,
+			 "Failed test for rte_bbdev_op_pool_create: "
+			"returned value is empty");
+
+	TEST_ASSERT(mp->elt_size == enc_size,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid element size for RTE_BBDEV_OP_TURBO_ENC, "
+			"mp->elt_size: %u", mp->elt_size);
+
+	rte_mempool_free(mp);
+
+	TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_NONE",
+			RTE_BBDEV_OP_NONE, size, cache_size, 0)) != NULL,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"returned value is empty for RTE_BBDEV_OP_NONE");
+
+	TEST_ASSERT(mp->elt_size == RTE_MAX(enc_size, dec_size),
+			"Failed test for rte_bbdev_op_pool_create: "
+			"invalid element size for RTE_BBDEV_OP_NONE, "
+			"mp->elt_size: %u", mp->elt_size);
+
+	rte_mempool_free(mp);
+
+	TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_INV",
+			RTE_BBDEV_OP_TYPE_COUNT, size, cache_size, 0)) == NULL,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"returned value is not NULL for invalid type");
+
+	/* Invalid pool configuration */
+	size = 128;
+	cache_size = 256;
+
+	TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_InvSize",
+			RTE_BBDEV_OP_NONE, size, cache_size, 0)) == NULL,
+			"Failed test for rte_bbdev_op_pool_create: "
+			"returned value should be empty "
+			"because size of per-lcore local cache "
+			"is greater than size of the mempool.");
+
+	return TEST_SUCCESS;
+}
+
+/**
+ *  Create a pool of ops of type RTE_BBDEV_OP_TURBO_DEC and check that only
+ *  ops of that type can be allocated from it
+ */
+static int
+test_bbdev_op_type(void)
+{
+	struct rte_mempool *mp_dec;
+
+	const unsigned int OPS_COUNT = 32;
+	struct rte_bbdev_dec_op *dec_ops_arr[OPS_COUNT];
+	struct rte_bbdev_enc_op *enc_ops_arr[OPS_COUNT];
+
+	const char *pool_dec = "Test_op_dec";
+
+	/* Valid pool configuration */
+	uint32_t num_elements = 256;
+	uint32_t cache_size = 128;
+
+	/* mempool type : RTE_BBDEV_OP_TURBO_DEC */
+	mp_dec = rte_bbdev_op_pool_create(pool_dec,
+			RTE_BBDEV_OP_TURBO_DEC, num_elements, cache_size, 0);
+	TEST_ASSERT(mp_dec != NULL, "Failed to create %s mempool", pool_dec);
+
+	TEST_ASSERT(rte_bbdev_dec_op_alloc_bulk(mp_dec, dec_ops_arr, 1) == 0,
+			"Failed test for rte_bbdev_op_alloc_bulk TURBO_DEC: "
+			"OPs type: RTE_BBDEV_OP_TURBO_DEC");
+
+	TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_dec, enc_ops_arr, 1) != 0,
+			"Failed test for rte_bbdev_op_alloc_bulk TURBO_DEC: "
+			"OPs type: RTE_BBDEV_OP_TURBO_ENC");
+
+	rte_mempool_free(mp_dec);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_op_pool_size(void)
+{
+	struct rte_mempool *mp_none;
+
+	const unsigned int OPS_COUNT = 128;
+	struct rte_bbdev_enc_op *ops_enc_arr[OPS_COUNT];
+	struct rte_bbdev_enc_op *ops_ext_arr[OPS_COUNT];
+	struct rte_bbdev_enc_op *ops_ext2_arr[OPS_COUNT];
+
+	const char *pool_none = "Test_pool_size";
+
+	/* Valid pool configuration */
+	uint32_t num_elements = 256;
+	uint32_t cache_size = 0;
+
+	/* Create mempool type : RTE_BBDEV_OP_TURBO_ENC, size : 256 */
+	mp_none = rte_bbdev_op_pool_create(pool_none, RTE_BBDEV_OP_TURBO_ENC,
+			num_elements, cache_size, 0);
+	TEST_ASSERT(mp_none != NULL, "Failed to create %s mempool", pool_none);
+
+	/* Allocate 128 RTE_BBDEV_OP_TURBO_ENC ops */
+	rte_bbdev_enc_op_alloc_bulk(mp_none, ops_enc_arr, OPS_COUNT);
+
+	/* Allocate 128 more RTE_BBDEV_OP_TURBO_ENC ops, this should succeed */
+	TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext_arr,
+			OPS_COUNT) == 0,
+			"Failed test for allocating bbdev ops: "
+			"Mempool size: 256, Free : 128, Attempted to add: 128");
+
+	/* Try adding 128 more RTE_BBDEV_OP_TURBO_ENC ops, this should fail */
+	TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext2_arr,
+			OPS_COUNT) != 0,
+			"Failed test for allocating bbdev ops: "
+			"Mempool size: 256, Free : 0, Attempted to add: 128");
+
+	/* Free-up 128 RTE_BBDEV_OP_TURBO_ENC ops */
+	rte_bbdev_enc_op_free_bulk(ops_enc_arr, OPS_COUNT);
+
+	/* Allocate 128 RTE_BBDEV_OP_TURBO_ENC ops again, this should succeed */
+	/* Note: with cache_size > 0, re-allocation of more than 127 ops fails */
+	TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext2_arr,
+			OPS_COUNT) == 0,
+			"Failed test for allocating ops after mempool freed:  "
+			"Mempool size: 256, Free : 128, Attempted to add: 128");
+
+	rte_mempool_free(mp_none);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_count(void)
+{
+	uint8_t num_devs, num_valid_devs = 0;
+
+	for (num_devs = 0; num_devs < RTE_BBDEV_MAX_DEVS; num_devs++) {
+		if (rte_bbdev_is_valid(num_devs))
+			num_valid_devs++;
+	}
+
+	num_devs = rte_bbdev_count();
+	TEST_ASSERT(num_valid_devs == num_devs,
+			"Failed test for rte_bbdev_is_valid: "
+			"invalid num_devs %u ", num_devs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_stats(void)
+{
+	uint8_t dev_id = null_dev_id;
+	uint16_t queue_id = 0;
+	struct rte_bbdev_dec_op *dec_ops[4096] = { 0 };
+	struct rte_bbdev_dec_op *dec_proc_ops[4096] = { 0 };
+	struct rte_bbdev_enc_op *enc_ops[4096] = { 0 };
+	struct rte_bbdev_enc_op *enc_proc_ops[4096] = { 0 };
+	uint16_t num_ops = 236;
+	struct rte_bbdev_stats stats;
+	struct bbdev_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_stop(dev_id, queue_id),
+			"Failed to stop queue %u on device %u ", queue_id,
+			dev_id);
+	TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id),
+			"Failed to stop bbdev %u ", dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed to configure queue %u on device %u ",
+			queue_id, dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+			"Failed to start bbdev %u ", dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+			"Failed to start queue %u on device %u ", queue_id,
+			dev_id);
+
+	/* Starting an already started queue should also succeed */
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+			"Failed to start queue %u on device %u ", queue_id,
+			dev_id);
+
+	/* Tests after enqueue operation */
+	rte_bbdev_enqueue_enc_ops(dev_id, queue_id, enc_ops, num_ops);
+	rte_bbdev_enqueue_dec_ops(dev_id, queue_id, dec_ops, num_ops);
+
+	TEST_ASSERT_FAIL(rte_bbdev_stats_get(RTE_BBDEV_MAX_DEVS, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	TEST_ASSERT_FAIL(rte_bbdev_stats_get(dev_id, NULL),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	TEST_ASSERT(stats.enqueued_count == 2 * num_ops,
+			"Failed test for rte_bbdev_enqueue_ops: "
+			"invalid enqueued_count %" PRIu64 " ",
+			stats.enqueued_count);
+
+	TEST_ASSERT(stats.dequeued_count == 0,
+			"Failed test for rte_bbdev_stats_get: "
+			"invalid dequeued_count %" PRIu64 " ",
+			stats.dequeued_count);
+
+	/* Tests after dequeue operation */
+	rte_bbdev_dequeue_enc_ops(dev_id, queue_id, enc_proc_ops, num_ops);
+	rte_bbdev_dequeue_dec_ops(dev_id, queue_id, dec_proc_ops, num_ops);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	TEST_ASSERT(stats.dequeued_count == 2 * num_ops,
+			"Failed test for rte_bbdev_dequeue_ops: "
+			"invalid dequeued_count %" PRIu64 " ",
+			stats.dequeued_count);
+
+	TEST_ASSERT(stats.enqueue_err_count == 0,
+			"Failed test for rte_bbdev_stats_get: "
+			"invalid enqueue_err_count %" PRIu64 " ",
+			stats.enqueue_err_count);
+
+	TEST_ASSERT(stats.dequeue_err_count == 0,
+			"Failed test for rte_bbdev_stats_get: "
+			"invalid dequeue_err_count %" PRIu64 " ",
+			stats.dequeue_err_count);
+
+	/* Tests after reset operation */
+	TEST_ASSERT_FAIL(rte_bbdev_stats_reset(RTE_BBDEV_MAX_DEVS),
+			"Failed test for rte_bbdev_stats_reset: invalid dev_id");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+			"Failed to reset statistics for device %u ", dev_id);
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	TEST_ASSERT(stats.enqueued_count == 0,
+			"Failed test for rte_bbdev_stats_reset: "
+			"invalid enqueued_count %" PRIu64 " ",
+			stats.enqueued_count);
+
+	TEST_ASSERT(stats.dequeued_count == 0,
+			"Failed test for rte_bbdev_stats_reset: "
+			"invalid dequeued_count %" PRIu64 " ",
+			stats.dequeued_count);
+
+	TEST_ASSERT(stats.enqueue_err_count == 0,
+			"Failed test for rte_bbdev_stats_reset: "
+			"invalid enqueue_err_count %" PRIu64 " ",
+			stats.enqueue_err_count);
+
+	TEST_ASSERT(stats.dequeue_err_count == 0,
+			"Failed test for rte_bbdev_stats_reset: "
+			"invalid dequeue_err_count %" PRIu64 " ",
+			stats.dequeue_err_count);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_driver_init(void)
+{
+	struct rte_bbdev *dev1, *dev2;
+	const char *name = "dev_name";
+	char name_tmp[16];
+	int num_devs, num_devs_tmp;
+
+	dev1 = rte_bbdev_allocate(NULL);
+	TEST_ASSERT(dev1 == NULL,
+			"Failed test for rte_bbdev_allocate with NULL name");
+
+	dev1 = rte_bbdev_allocate(name);
+	TEST_ASSERT(dev1 != NULL, "Failed to initialize bbdev driver");
+
+	dev2 = rte_bbdev_allocate(name);
+	TEST_ASSERT(dev2 == NULL,
+			"Failed to initialize bbdev driver: "
+			"driver with the same name has been initialized before");
+
+	num_devs = rte_bbdev_count() - 1;
+	num_devs_tmp = num_devs;
+
+	/* Initialize the maximum amount of devices */
+	do {
+		snprintf(name_tmp, sizeof(name_tmp), "%s%i", "name_", num_devs);
+		dev2 = rte_bbdev_allocate(name_tmp);
+		TEST_ASSERT(dev2 != NULL,
+				"Failed to initialize bbdev driver");
+		++num_devs;
+	} while (num_devs < (RTE_BBDEV_MAX_DEVS - 1));
+
+	snprintf(name_tmp, sizeof(name_tmp), "%s%i", "name_", num_devs);
+	dev2 = rte_bbdev_allocate(name_tmp);
+	TEST_ASSERT(dev2 == NULL, "Failed test for rte_bbdev_allocate: "
+			"device number %d exceeds RTE_BBDEV_MAX_DEVS: %d ",
+			num_devs, RTE_BBDEV_MAX_DEVS);
+
+	num_devs--;
+
+	while (num_devs >= num_devs_tmp) {
+		snprintf(name_tmp, sizeof(name_tmp), "%s%i", "name_", num_devs);
+		dev2 = rte_bbdev_get_named_dev(name_tmp);
+		TEST_ASSERT_SUCCESS(rte_bbdev_release(dev2),
+				"Failed to uninitialize bbdev driver %s ",
+				name_tmp);
+		num_devs--;
+	}
+
+	TEST_ASSERT(dev1->data->dev_id < RTE_BBDEV_MAX_DEVS,
+			"Failed test rte_bbdev_allocate: "
+			"invalid dev_id %" PRIu8 ", max number of devices %d ",
+			dev1->data->dev_id, RTE_BBDEV_MAX_DEVS);
+
+	TEST_ASSERT(dev1->state == RTE_BBDEV_INITIALIZED,
+			"Failed test rte_bbdev_allocate: "
+			"invalid state %d (0 - RTE_BBDEV_UNUSED, 1 - RTE_BBDEV_INITIALIZED)",
+			dev1->state);
+
+	TEST_ASSERT_FAIL(rte_bbdev_release(NULL),
+			"Failed to uninitialize bbdev driver with NULL bbdev");
+
+	snprintf(name_tmp, sizeof(name_tmp), "%s", "invalid_name");
+	dev2 = rte_bbdev_get_named_dev(name_tmp);
+	TEST_ASSERT_FAIL(rte_bbdev_release(dev2),
+			"Failed to uninitialize bbdev driver with invalid name");
+
+	dev2 = rte_bbdev_get_named_dev(name);
+	TEST_ASSERT_SUCCESS(rte_bbdev_release(dev2),
+			"Failed to uninitialize bbdev driver: %s ", name);
+
+	return TEST_SUCCESS;
+}
+
+static void
+event_callback(uint16_t dev_id, enum rte_bbdev_event_type type, void *param,
+		void *ret_param)
+{
+	RTE_SET_USED(dev_id);
+	RTE_SET_USED(ret_param);
+
+	if (param == NULL)
+		return;
+
+	if (type == RTE_BBDEV_EVENT_UNKNOWN ||
+			type == RTE_BBDEV_EVENT_ERROR ||
+			type == RTE_BBDEV_EVENT_MAX)
+		*(int *)param = type;
+}
+
+static int
+test_bbdev_callback(void)
+{
+	struct rte_bbdev *dev1, *dev2;
+	const char *name = "dev_name1";
+	const char *name2 = "dev_name2";
+	int event_status;
+	uint8_t invalid_dev_id = 7;
+	enum rte_bbdev_event_type invalid_event_type = RTE_BBDEV_EVENT_MAX;
+	uint8_t dev_id;
+
+	dev1 = rte_bbdev_allocate(name);
+	TEST_ASSERT(dev1 != NULL, "Failed to initialize bbdev driver");
+
+	/*
+	 * RTE_BBDEV_EVENT_UNKNOWN - unregistered
+	 * RTE_BBDEV_EVENT_ERROR - unregistered
+	 */
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == -1,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"events were not registered ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_MAX, event_callback, NULL),
+			"Failed to callback register for RTE_BBDEV_EVENT_MAX ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_MAX, event_callback, NULL),
+			"Failed to unregister RTE_BBDEV_EVENT_MAX ");
+
+	/*
+	 * RTE_BBDEV_EVENT_UNKNOWN - registered
+	 * RTE_BBDEV_EVENT_ERROR - unregistered
+	 */
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+			"Failed to register callback for RTE_BBDEV_EVENT_UNKNOWN");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process "
+			"for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"event RTE_BBDEV_EVENT_ERROR was not registered ");
+
+	/*
+	 * RTE_BBDEV_EVENT_UNKNOWN - registered
+	 * RTE_BBDEV_EVENT_ERROR - registered
+	 */
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed to register callback for RTE_BBDEV_EVENT_ERROR ");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed to callback register for RTE_BBDEV_EVENT_ERROR"
+			"(re-registration) ");
+
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process "
+			"for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == 1,
+			"Failed test for rte_bbdev_pmd_callback_process "
+			"for RTE_BBDEV_EVENT_ERROR ");
+
+	/*
+	 * RTE_BBDEV_EVENT_UNKNOWN - registered
+	 * RTE_BBDEV_EVENT_ERROR - unregistered
+	 */
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed to unregister RTE_BBDEV_EVENT_ERROR ");
+
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process "
+			"for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"event RTE_BBDEV_EVENT_ERROR was unregistered ");
+
+	/* rte_bbdev_callback_register with invalid inputs */
+	TEST_ASSERT_FAIL(rte_bbdev_callback_register(invalid_dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed test for rte_bbdev_callback_register "
+			"for invalid_dev_id ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+			invalid_event_type, event_callback, &event_status),
+			"Failed to callback register for invalid event type ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_ERROR, NULL, &event_status),
+			"Failed to callback register - no callback function ");
+
+	/* The impact of devices on each other */
+	dev2 = rte_bbdev_allocate(name2);
+	TEST_ASSERT(dev2 != NULL,
+			"Failed to initialize bbdev driver");
+
+	/*
+	 * dev2:
+	 * RTE_BBDEV_EVENT_UNKNOWN - unregistered
+	 * RTE_BBDEV_EVENT_ERROR - unregistered
+	 */
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == -1,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"events were not registered ");
+
+	/*
+	 * dev1: RTE_BBDEV_EVENT_ERROR - unregistered
+	 * dev2: RTE_BBDEV_EVENT_ERROR - registered
+	 */
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev2->data->dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed to register callback for RTE_BBDEV_EVENT_ERROR");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == -1,
+		"Failed test for rte_bbdev_pmd_callback_process in dev1 "
+		"for RTE_BBDEV_EVENT_ERROR ");
+
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == 1,
+		"Failed test for rte_bbdev_pmd_callback_process in dev2 "
+		"for RTE_BBDEV_EVENT_ERROR ");
+
+	/*
+	 * dev1: RTE_BBDEV_EVENT_UNKNOWN - registered
+	 * dev2: RTE_BBDEV_EVENT_UNKNOWN - unregistered
+	 */
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev2->data->dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+			"Failed to callback register for RTE_BBDEV_EVENT_UNKNOWN "
+			"in dev 2 ");
+
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process in dev2"
+			" for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev2->data->dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+			"Failed to unregister RTE_BBDEV_EVENT_UNKNOWN ");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev2->data->dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+			"Failed to unregister RTE_BBDEV_EVENT_UNKNOWN: "
+			"unregister called a second time ");
+
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == -1,
+			"Failed test for rte_bbdev_pmd_callback_process in dev2"
+			" for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	TEST_ASSERT(event_status == 0,
+			"Failed test for rte_bbdev_pmd_callback_process in dev1 "
+			"for RTE_BBDEV_EVENT_UNKNOWN ");
+
+	/* rte_bbdev_pmd_callback_process with invalid inputs */
+	rte_bbdev_pmd_callback_process(NULL, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev1, invalid_event_type, NULL);
+	TEST_ASSERT(event_status == -1,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"for invalid event type ");
+
+	/* rte_bbdev_callback_unregister with invalid inputs */
+	TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(invalid_dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+			"Failed test for rte_bbdev_callback_unregister "
+			"for invalid_dev_id ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+			invalid_event_type, event_callback, &event_status),
+			"Failed test for rte_bbdev_callback_unregister "
+			"for invalid event type ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, NULL, &event_status),
+			"Failed test for rte_bbdev_callback_unregister "
+			"with NULL callback function ");
+
+	dev_id = dev1->data->dev_id;
+
+	rte_bbdev_release(dev1);
+	rte_bbdev_release(dev2);
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed test for rte_bbdev_callback_register: "
+			"function called after rte_bbdev_release ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev_id,
+			RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+			"Failed test for rte_dev_callback_unregister: "
+			"function called after rte_bbdev_release ");
+
+	event_status = -1;
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+	TEST_ASSERT(event_status == -1,
+			"Failed test for rte_bbdev_pmd_callback_process: "
+			"callback function was called after rte_bbdev_release");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_invalid_driver(void)
+{
+	struct rte_bbdev dev1, *dev2;
+	uint8_t dev_id = null_dev_id;
+	uint16_t queue_id = 0;
+	struct rte_bbdev_stats stats;
+	struct bbdev_testsuite_params *ts_params = &testsuite_params;
+	struct rte_bbdev_queue_info qinfo;
+	struct rte_bbdev_ops dev_ops_tmp;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id), "Failed to stop bbdev %u ",
+			dev_id);
+
+	dev1 = rte_bbdev_devices[dev_id];
+	dev2 = &rte_bbdev_devices[dev_id];
+
+	/* Tests for rte_bbdev_setup_queues */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"NULL dev_ops structure ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.info_get = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"NULL info_get ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.queue_release = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+			"Failed test for rte_bbdev_setup_queues: "
+			"NULL queue_release ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev2->data->socket_id = SOCKET_ID_ANY;
+	TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, 1,
+			SOCKET_ID_ANY), "Failed to configure bbdev %u", dev_id);
+
+	/* Test for rte_bbdev_queue_configure */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed to configure queue %u on device %u "
+			"with NULL dev_ops structure ", queue_id, dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.queue_setup = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed to configure queue %u on device %u "
+			"with NULL queue_setup ", queue_id, dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.info_get = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed to configure queue %u on device %u "
+			"with NULL info_get ", queue_id, dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_configure(RTE_BBDEV_MAX_DEVS,
+			queue_id, &ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure: "
+			"invalid dev_id ");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+			&ts_params->qconf),
+			"Failed to configure queue %u on device %u ",
+			queue_id, dev_id);
+
+	/* Test for rte_bbdev_queue_info_get */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_info_get(dev_id, queue_id, &qinfo),
+			"Failed test for rte_bbdev_queue_info_get: "
+			"NULL dev_ops structure ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(RTE_BBDEV_MAX_DEVS,
+			queue_id, &qinfo),
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid dev_id ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(dev_id,
+			RTE_MAX_QUEUES_PER_PORT, &qinfo),
+			"Failed test for rte_bbdev_queue_info_get: "
+			"invalid queue_id ");
+
+	TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(dev_id, queue_id, NULL),
+			"Failed test for rte_bbdev_queue_info_get: "
+			"NULL queue_info ");
+
+	/* Test for rte_bbdev_start */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_start(dev_id),
+			"Failed to start bbdev %u "
+			"with NULL dev_ops structure ", dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+			"Failed to start bbdev %u ", dev_id);
+
+	/* Test for rte_bbdev_queue_start */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_start(dev_id, queue_id),
+			"Failed to start queue %u on device %u: "
+			"NULL dev_ops structure", queue_id, dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+			"Failed to start queue %u on device %u ", queue_id,
+			dev_id);
+
+	/* Tests for rte_bbdev_stats_get */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.stats_get = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get: "
+			"NULL stats_get ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+			"Failed test for rte_bbdev_stats_get on device %u ",
+			dev_id);
+
+	/*
+	 * Tests for:
+	 * rte_bbdev_callback_register,
+	 * rte_bbdev_pmd_callback_process,
+	 * rte_bbdev_callback_unregister
+	 */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, NULL),
+			"Failed to register callback for RTE_BBDEV_EVENT_UNKNOWN");
+	rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev_id,
+			RTE_BBDEV_EVENT_UNKNOWN, event_callback, NULL),
+			"Failed to unregister RTE_BBDEV_EVENT_UNKNOWN ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	/* Tests for rte_bbdev_stats_reset */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_stats_reset(dev_id),
+			"Failed to reset statistics for device %u ", dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	dev_ops_tmp = *dev2->dev_ops;
+	dev_ops_tmp.stats_reset = NULL;
+	dev2->dev_ops = &dev_ops_tmp;
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+			"Failed test for rte_bbdev_stats_reset: "
+			"NULL stats_reset ");
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+			"Failed to reset statistics for device %u ", dev_id);
+
+	/* Tests for rte_bbdev_queue_stop */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_queue_stop(dev_id, queue_id),
+			"Failed to stop queue %u on device %u: "
+			"NULL dev_ops structure", queue_id, dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_stop(dev_id, queue_id),
+			"Failed to stop queue %u on device %u ", queue_id,
+			dev_id);
+
+	/* Tests for rte_bbdev_stop */
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_stop(dev_id),
+			"Failed to stop bbdev %u with NULL dev_ops structure ",
+			dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id),
+			"Failed to stop bbdev %u ", dev_id);
+
+	/* Tests for rte_bbdev_close */
+	TEST_ASSERT_FAIL(rte_bbdev_close(RTE_BBDEV_MAX_DEVS),
+			"Failed to close bbdev with invalid dev_id");
+
+	dev2->dev_ops = NULL;
+	TEST_ASSERT_FAIL(rte_bbdev_close(dev_id),
+			"Failed to close bbdev %u with NULL dev_ops structure ",
+			dev_id);
+	dev2->dev_ops = dev1.dev_ops;
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_close(dev_id),
+			"Failed to close bbdev %u ", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_get_named_dev(void)
+{
+	struct rte_bbdev *dev, *dev_tmp;
+	const char *name = "name";
+
+	dev = rte_bbdev_allocate(name);
+	TEST_ASSERT(dev != NULL, "Failed to initialize bbdev driver");
+
+	dev_tmp = rte_bbdev_get_named_dev(NULL);
+	TEST_ASSERT(dev_tmp == NULL, "Failed test for rte_bbdev_get_named_dev: "
+			"function called with NULL parameter");
+
+	dev_tmp = rte_bbdev_get_named_dev(name);
+
+	TEST_ASSERT(dev == dev_tmp, "Failed test for rte_bbdev_get_named_dev: "
+			"wrong device was returned ");
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_release(dev),
+			"Failed to uninitialize bbdev driver %s ", name);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite bbdev_null_testsuite = {
+	.suite_name = "BBDEV NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+
+		TEST_CASE(test_bbdev_configure_invalid_dev_id),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_configure_invalid_num_queues),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_configure_stop_device),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_configure_stop_queue),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_configure_invalid_queue_configure),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_op_pool),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_op_type),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_op_pool_size),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_stats),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_driver_init),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_callback),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_invalid_driver),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_bbdev_get_named_dev),
+
+		TEST_CASE(test_bbdev_count),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+REGISTER_TEST_COMMAND(unittest, bbdev_null_testsuite);
diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
new file mode 100644
index 0000000..e46540e
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -0,0 +1,2193 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_eal.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_launch.h>
+#include <rte_bbdev.h>
+#include <rte_cycles.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_hexdump.h>
+
+#ifdef RTE_LIBRTE_PMD_TIP
+#include <rte_tip_pmd.h>
+#endif
+
+#include "main.h"
+#include "test_bbdev_vector.h"
+
+#define GET_SOCKET(socket_id) (((socket_id) == SOCKET_ID_ANY) ? 0 : (socket_id))
+
+#define MAX_QUEUES RTE_MAX_LCORE
+
+#ifdef RTE_LIBRTE_PMD_TIP
+#define TIPPF_DRIVER_NAME ("intel_tip_pf")
+#define TIPVF_DRIVER_NAME ("intel_tip_vf")
+#define CFG_FILE_NAME ("tip_config.cfg")
+#define RTE_TIP_CONFIG_FILE "RTE_TIP_CONFIG_FILE"
+#endif
+
+#define OPS_CACHE_SIZE 256U
+#define OPS_POOL_SIZE_MIN 511U /* 0.5K per queue */
+
+#define SYNC_WAIT 0
+#define SYNC_START 1
+
+#define INVALID_QUEUE_ID -1
+
+static struct test_bbdev_vector test_vector;
+
+/* Switch between PMD and Interrupt for throughput TC */
+static bool intr_enabled;
+
+/* Represents tested active devices */
+static struct active_device {
+	const char *driver_name;
+	uint8_t dev_id;
+	uint16_t supported_ops;
+	uint16_t queue_ids[MAX_QUEUES];
+	uint16_t nb_queues;
+	struct rte_mempool *ops_mempool;
+	struct rte_mempool *in_mbuf_pool;
+	struct rte_mempool *hard_out_mbuf_pool;
+	struct rte_mempool *soft_out_mbuf_pool;
+} active_devs[RTE_BBDEV_MAX_DEVS];
+
+static uint8_t nb_active_devs;
+
+/* Data buffers used by BBDEV ops */
+struct test_buffers {
+	struct rte_bbdev_op_data *inputs;
+	struct rte_bbdev_op_data *hard_outputs;
+	struct rte_bbdev_op_data *soft_outputs;
+};
+
+/* Operation parameters specific for given test case */
+struct test_op_params {
+	struct rte_mempool *mp;
+	struct rte_bbdev_dec_op *ref_dec_op;
+	struct rte_bbdev_enc_op *ref_enc_op;
+	uint16_t burst_sz;
+	uint16_t num_to_process;
+	uint16_t num_lcores;
+	int vector_mask;
+	rte_atomic16_t sync;
+	struct test_buffers q_bufs[RTE_MAX_NUMA_NODES][MAX_QUEUES];
+};
+
+/* Contains per lcore params */
+struct thread_params {
+	uint8_t dev_id;
+	uint16_t queue_id;
+	uint64_t start_time;
+	double mops;
+	double mbps;
+	rte_atomic16_t nb_dequeued;
+	rte_atomic16_t processing_status;
+	struct test_op_params *op_params;
+};
+
+typedef int (test_case_function)(struct active_device *ad,
+		struct test_op_params *op_params);
+
+static inline void
+set_avail_op(struct active_device *ad, enum rte_bbdev_op_type op_type)
+{
+	ad->supported_ops |= (1 << op_type);
+}
+
+static inline bool
+is_avail_op(struct active_device *ad, enum rte_bbdev_op_type op_type)
+{
+	return ad->supported_ops & (1 << op_type);
+}
+
+static inline bool
+flags_match(uint32_t flags_req, uint32_t flags_present)
+{
+	return (flags_req & flags_present) == flags_req;
+}
+
+static void
+clear_soft_out_cap(uint32_t *op_flags)
+{
+	*op_flags &= ~RTE_BBDEV_TURBO_SOFT_OUTPUT;
+	*op_flags &= ~RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT;
+	*op_flags &= ~RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+}
+
+static int
+check_dev_cap(const struct rte_bbdev_info *dev_info)
+{
+	unsigned int i;
+	unsigned int nb_inputs, nb_soft_outputs, nb_hard_outputs;
+	const struct rte_bbdev_op_cap *op_cap = dev_info->drv.capabilities;
+
+	nb_inputs = test_vector.entries[DATA_INPUT].nb_segments;
+	nb_soft_outputs = test_vector.entries[DATA_SOFT_OUTPUT].nb_segments;
+	nb_hard_outputs = test_vector.entries[DATA_HARD_OUTPUT].nb_segments;
+
+	for (i = 0; op_cap->type != RTE_BBDEV_OP_NONE; ++i, ++op_cap) {
+		if (op_cap->type != test_vector.op_type)
+			continue;
+
+		if (op_cap->type == RTE_BBDEV_OP_TURBO_DEC) {
+			const struct rte_bbdev_op_cap_turbo_dec *cap =
+					&op_cap->cap.turbo_dec;
+			/* Ignore lack of soft output capability, just skip
+			 * checking if soft output is valid.
+			 */
+			if ((test_vector.turbo_dec.op_flags &
+					RTE_BBDEV_TURBO_SOFT_OUTPUT) &&
+					!(cap->capability_flags &
+					RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+				printf(
+					"WARNING: Device \"%s\" does not support soft output - soft output flags will be ignored.\n",
+					dev_info->dev_name);
+				clear_soft_out_cap(
+					&test_vector.turbo_dec.op_flags);
+			}
+
+			if (!flags_match(test_vector.turbo_dec.op_flags,
+					cap->capability_flags))
+				return TEST_FAILED;
+			if (nb_inputs > cap->num_buffers_src) {
+				printf("Too many inputs defined: %u, max: %u\n",
+					nb_inputs, cap->num_buffers_src);
+				return TEST_FAILED;
+			}
+			if (nb_soft_outputs > cap->num_buffers_soft_out &&
+					(test_vector.turbo_dec.op_flags &
+					RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+				printf(
+					"Too many soft outputs defined: %u, max: %u\n",
+						nb_soft_outputs,
+						cap->num_buffers_soft_out);
+				return TEST_FAILED;
+			}
+			if (nb_hard_outputs > cap->num_buffers_hard_out) {
+				printf(
+					"Too many hard outputs defined: %u, max: %u\n",
+						nb_hard_outputs,
+						cap->num_buffers_hard_out);
+				return TEST_FAILED;
+			}
+			if (intr_enabled && !(cap->capability_flags &
+					RTE_BBDEV_TURBO_DEC_INTERRUPTS)) {
+				printf(
+					"Dequeue interrupts are not supported!\n");
+				return TEST_FAILED;
+			}
+
+			return TEST_SUCCESS;
+		} else if (op_cap->type == RTE_BBDEV_OP_TURBO_ENC) {
+			const struct rte_bbdev_op_cap_turbo_enc *cap =
+					&op_cap->cap.turbo_enc;
+
+			if (!flags_match(test_vector.turbo_enc.op_flags,
+					cap->capability_flags))
+				return TEST_FAILED;
+			if (nb_inputs > cap->num_buffers_src) {
+				printf("Too many inputs defined: %u, max: %u\n",
+					nb_inputs, cap->num_buffers_src);
+				return TEST_FAILED;
+			}
+			if (nb_hard_outputs > cap->num_buffers_dst) {
+				printf(
+					"Too many hard outputs defined: %u, max: %u\n",
+					nb_hard_outputs, cap->num_buffers_dst);
+				return TEST_FAILED;
+			}
+			if (intr_enabled && !(cap->capability_flags &
+					RTE_BBDEV_TURBO_ENC_INTERRUPTS)) {
+				printf(
+					"Dequeue interrupts are not supported!\n");
+				return TEST_FAILED;
+			}
+
+			return TEST_SUCCESS;
+		}
+	}
+
+	if ((i == 0) && (test_vector.op_type == RTE_BBDEV_OP_NONE))
+		return TEST_SUCCESS; /* Special case for NULL device */
+
+	return TEST_FAILED;
+}
+
+/* calculates the optimal mempool size (2^n - 1) that is not smaller than val */
+static unsigned int
+optimal_mempool_size(unsigned int val)
+{
+	return rte_align32pow2(val + 1) - 1;
+}
+
+/* allocates mbuf mempool for inputs and outputs */
+static struct rte_mempool *
+create_mbuf_pool(struct op_data_entries *entries, uint8_t dev_id,
+		int socket_id, unsigned int mbuf_pool_size,
+		const char *op_type_str)
+{
+	unsigned int i;
+	uint32_t max_seg_sz = 0;
+	char pool_name[RTE_MEMPOOL_NAMESIZE];
+
+	/* find max input segment size */
+	for (i = 0; i < entries->nb_segments; ++i)
+		if (entries->segments[i].length > max_seg_sz)
+			max_seg_sz = entries->segments[i].length;
+
+	snprintf(pool_name, sizeof(pool_name), "%s_pool_%u", op_type_str,
+			dev_id);
+	return rte_pktmbuf_pool_create(pool_name, mbuf_pool_size, 0, 0,
+			RTE_MAX(max_seg_sz + RTE_PKTMBUF_HEADROOM,
+			(unsigned int)RTE_MBUF_DEFAULT_BUF_SIZE), socket_id);
+}
+
+static int
+create_mempools(struct active_device *ad, int socket_id,
+		enum rte_bbdev_op_type op_type, uint16_t num_ops)
+{
+	struct rte_mempool *mp;
+	unsigned int ops_pool_size, mbuf_pool_size = 0;
+	char pool_name[RTE_MEMPOOL_NAMESIZE];
+
+	struct op_data_entries *in = &test_vector.entries[DATA_INPUT];
+	struct op_data_entries *hard_out =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+	struct op_data_entries *soft_out =
+			&test_vector.entries[DATA_SOFT_OUTPUT];
+
+	/* allocate ops mempool */
+	ops_pool_size = optimal_mempool_size(RTE_MAX(
+			/* Ops used plus 1 reference op */
+			RTE_MAX((unsigned int)(ad->nb_queues * num_ops + 1),
+			/* Minimal cache size plus 1 reference op */
+			(unsigned int)(1.5 * rte_lcore_count() *
+					OPS_CACHE_SIZE + 1)),
+			OPS_POOL_SIZE_MIN));
+	snprintf(pool_name, sizeof(pool_name), "%s_pool_%u",
+			rte_bbdev_op_type_str(op_type), ad->dev_id);
+	mp = rte_bbdev_op_pool_create(pool_name, op_type,
+			ops_pool_size, OPS_CACHE_SIZE, socket_id);
+	TEST_ASSERT_NOT_NULL(mp,
+			"ERROR Failed to create %u items ops pool for dev %u on socket %u.",
+			ops_pool_size,
+			ad->dev_id,
+			socket_id);
+	ad->ops_mempool = mp;
+
+	/* Inputs */
+	mbuf_pool_size = optimal_mempool_size(ops_pool_size * in->nb_segments);
+	mp = create_mbuf_pool(in, ad->dev_id, socket_id, mbuf_pool_size, "in");
+	TEST_ASSERT_NOT_NULL(mp,
+			"ERROR Failed to create %u items input pktmbuf pool for dev %u on socket %u.",
+			mbuf_pool_size,
+			ad->dev_id,
+			socket_id);
+	ad->in_mbuf_pool = mp;
+
+	/* Hard outputs */
+	mbuf_pool_size = optimal_mempool_size(ops_pool_size *
+			hard_out->nb_segments);
+	mp = create_mbuf_pool(hard_out, ad->dev_id, socket_id, mbuf_pool_size,
+			"hard_out");
+	TEST_ASSERT_NOT_NULL(mp,
+			"ERROR Failed to create %u items hard output pktmbuf pool for dev %u on socket %u.",
+			mbuf_pool_size,
+			ad->dev_id,
+			socket_id);
+	ad->hard_out_mbuf_pool = mp;
+
+	if (soft_out->nb_segments == 0)
+		return TEST_SUCCESS;
+
+	/* Soft outputs */
+	mbuf_pool_size = optimal_mempool_size(ops_pool_size *
+			soft_out->nb_segments);
+	mp = create_mbuf_pool(soft_out, ad->dev_id, socket_id, mbuf_pool_size,
+			"soft_out");
+	TEST_ASSERT_NOT_NULL(mp,
+			"ERROR Failed to create %u items soft output pktmbuf pool for dev %u on socket %u.",
+			mbuf_pool_size,
+			ad->dev_id,
+			socket_id);
+	ad->soft_out_mbuf_pool = mp;
+
+	return 0;
+}
+
+static int
+add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
+		struct test_bbdev_vector *vector)
+{
+	int ret;
+	unsigned int queue_id;
+	struct rte_bbdev_queue_conf qconf;
+	struct active_device *ad = &active_devs[nb_active_devs];
+	unsigned int nb_queues;
+	enum rte_bbdev_op_type op_type = vector->op_type;
+
+#ifdef RTE_LIBRTE_PMD_TIP
+	if (!strcmp(info->drv.driver_name, TIPPF_DRIVER_NAME)) {
+		struct rte_tip_conf tip_conf;
+		const char *cfg_file_name = getenv(RTE_TIP_CONFIG_FILE);
+
+		if (cfg_file_name == NULL) {
+			cfg_file_name = CFG_FILE_NAME;
+			printf(
+					"RTE_TIP_CONFIG_FILE was not set. %s will be used\n",
+					cfg_file_name);
+		}
+		ret = rte_tip_parse_conf_file(cfg_file_name, &tip_conf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to parse TIP config file");
+
+		/* setup TIP PF */
+		ret = rte_tip_configure(info->dev_name, &tip_conf);
+		TEST_ASSERT_SUCCESS(ret,
+				"Failed to configure TIP PF for TIP bbdev %s",
+				info->dev_name);
+
+		/* re-get info after it is updated */
+		rte_bbdev_info_get(dev_id, info);
+
+		/* re-check device capabilities - some options are disabled
+		 * after device configuration
+		 */
+		if (check_dev_cap(info)) {
+			printf(
+				"Device %d (%s) does not support specified capabilities\n",
+					dev_id, info->dev_name);
+			return -1;
+		}
+	}
+#endif
+
+	nb_queues = RTE_MIN(rte_lcore_count(), info->drv.max_num_queues);
+	/* setup device */
+	ret = rte_bbdev_setup_queues(dev_id, nb_queues, info->socket_id);
+	if (ret < 0) {
+		printf("rte_bbdev_setup_queues(%u, %u, %d) ret %i\n",
+				dev_id, nb_queues, info->socket_id, ret);
+		return TEST_FAILED;
+	}
+
+	/* configure interrupts if needed */
+	if (intr_enabled) {
+		ret = rte_bbdev_intr_enable(dev_id);
+		if (ret < 0) {
+			printf("rte_bbdev_intr_enable(%u) ret %i\n", dev_id,
+					ret);
+			return TEST_FAILED;
+		}
+	}
+
+	/* setup device queues */
+	qconf.socket = info->socket_id;
+	qconf.queue_size = info->drv.default_queue_conf.queue_size;
+	qconf.priority = 0;
+	qconf.deferred_start = 0;
+	qconf.op_type = op_type;
+
+	for (queue_id = 0; queue_id < nb_queues; ++queue_id) {
+		ret = rte_bbdev_queue_configure(dev_id, queue_id, &qconf);
+		if (ret != 0) {
+			printf(
+					"Allocated all queues (id=%u) at prio %u on dev %u\n",
+					queue_id, qconf.priority, dev_id);
+			qconf.priority++;
+			ret = rte_bbdev_queue_configure(ad->dev_id, queue_id,
+					&qconf);
+		}
+		if (ret != 0) {
+			printf("All queues on dev %u allocated: %u\n",
+					dev_id, queue_id);
+			break;
+		}
+		ad->queue_ids[queue_id] = queue_id;
+	}
+	TEST_ASSERT(queue_id != 0,
+			"ERROR Failed to configure any queues on dev %u",
+			dev_id);
+	ad->nb_queues = queue_id;
+
+	set_avail_op(ad, op_type);
+
+	return TEST_SUCCESS;
+}
+
+static int
+add_active_device(uint8_t dev_id, struct rte_bbdev_info *info,
+		struct test_bbdev_vector *vector)
+{
+	int ret;
+
+	active_devs[nb_active_devs].driver_name = info->drv.driver_name;
+	active_devs[nb_active_devs].dev_id = dev_id;
+
+	ret = add_bbdev_dev(dev_id, info, vector);
+	if (ret == TEST_SUCCESS)
+		++nb_active_devs;
+	return ret;
+}
+
+static uint8_t
+populate_active_devices(void)
+{
+	int ret;
+	uint8_t dev_id;
+	uint8_t nb_devs_added = 0;
+	struct rte_bbdev_info info;
+
+	RTE_BBDEV_FOREACH(dev_id) {
+		rte_bbdev_info_get(dev_id, &info);
+
+		if (check_dev_cap(&info)) {
+			printf(
+				"Device %d (%s) does not support specified capabilities\n",
+					dev_id, info.dev_name);
+			continue;
+		}
+
+		ret = add_active_device(dev_id, &info, &test_vector);
+		if (ret != 0) {
+			printf("Adding active bbdev %s skipped\n",
+					info.dev_name);
+			continue;
+		}
+		nb_devs_added++;
+	}
+
+	return nb_devs_added;
+}
+
+static int
+read_test_vector(void)
+{
+	int ret;
+
+	memset(&test_vector, 0, sizeof(test_vector));
+	printf("Test vector file = %s\n", get_vector_filename());
+	ret = test_bbdev_vector_read(get_vector_filename(), &test_vector);
+	TEST_ASSERT_SUCCESS(ret, "Failed to parse file %s\n",
+			get_vector_filename());
+
+	return TEST_SUCCESS;
+}
+
+static int
+testsuite_setup(void)
+{
+	TEST_ASSERT_SUCCESS(read_test_vector(), "Test suite setup failed\n");
+
+	if (populate_active_devices() == 0) {
+		printf("No suitable devices found!\n");
+		return TEST_SKIPPED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+interrupt_testsuite_setup(void)
+{
+	TEST_ASSERT_SUCCESS(read_test_vector(), "Test suite setup failed\n");
+
+	/* Enable interrupts */
+	intr_enabled = true;
+
+	/* Special case for NULL device (RTE_BBDEV_OP_NONE) */
+	if (populate_active_devices() == 0 ||
+			test_vector.op_type == RTE_BBDEV_OP_NONE) {
+		intr_enabled = false;
+		printf("No suitable devices found!\n");
+		return TEST_SKIPPED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	uint8_t dev_id;
+
+	/* Unconfigure devices */
+	RTE_BBDEV_FOREACH(dev_id)
+		rte_bbdev_close(dev_id);
+
+	/* Clear active devices structs. */
+	memset(active_devs, 0, sizeof(active_devs));
+	nb_active_devs = 0;
+}
+
+static int
+ut_setup(void)
+{
+	uint8_t i, dev_id;
+
+	for (i = 0; i < nb_active_devs; i++) {
+		dev_id = active_devs[i].dev_id;
+		/* reset bbdev stats */
+		TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+				"Failed to reset stats of bbdev %u", dev_id);
+		/* start the device */
+		TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+				"Failed to start bbdev %u", dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	uint8_t i, dev_id;
+	struct rte_bbdev_stats stats;
+
+	for (i = 0; i < nb_active_devs; i++) {
+		dev_id = active_devs[i].dev_id;
+		/* read stats and print */
+		rte_bbdev_stats_get(dev_id, &stats);
+		/* Stop the device */
+		rte_bbdev_stop(dev_id);
+	}
+}
+
+static int
+init_op_data_objs(struct rte_bbdev_op_data *bufs,
+		struct op_data_entries *ref_entries,
+		struct rte_mempool *mbuf_pool, const uint16_t n,
+		enum op_data_type op_type, uint16_t min_alignment)
+{
+	int ret;
+	unsigned int i, j;
+
+	for (i = 0; i < n; ++i) {
+		char *data;
+		struct op_data_buf *seg = &ref_entries->segments[0];
+		struct rte_mbuf *m_head = rte_pktmbuf_alloc(mbuf_pool);
+		TEST_ASSERT_NOT_NULL(m_head,
+				"Not enough mbufs in %d data type mbuf pool (needed %u, available %u)",
+				op_type, n * ref_entries->nb_segments,
+				mbuf_pool->size);
+
+		bufs[i].data = m_head;
+		bufs[i].offset = 0;
+		bufs[i].length = 0;
+
+		if (op_type == DATA_INPUT) {
+			data = rte_pktmbuf_append(m_head, seg->length);
+			TEST_ASSERT_NOT_NULL(data,
+					"Couldn't append %u bytes to mbuf from %d data type mbuf pool",
+					seg->length, op_type);
+
+			TEST_ASSERT(data == RTE_PTR_ALIGN(data, min_alignment),
+					"Data addr in mbuf (%p) is not aligned to device min alignment (%u)",
+					data, min_alignment);
+			rte_memcpy(data, seg->addr, seg->length);
+			bufs[i].length += seg->length;
+
+			for (j = 1; j < ref_entries->nb_segments; ++j) {
+				struct rte_mbuf *m_tail =
+						rte_pktmbuf_alloc(mbuf_pool);
+				TEST_ASSERT_NOT_NULL(m_tail,
+						"Not enough mbufs in %d data type mbuf pool (needed %u, available %u)",
+						op_type,
+						n * ref_entries->nb_segments,
+						mbuf_pool->size);
+				seg += 1;
+
+				data = rte_pktmbuf_append(m_tail, seg->length);
+				TEST_ASSERT_NOT_NULL(data,
+						"Couldn't append %u bytes to mbuf from %d data type mbuf pool",
+						seg->length, op_type);
+
+				TEST_ASSERT(data == RTE_PTR_ALIGN(data,
+						min_alignment),
+						"Data addr in mbuf (%p) is not aligned to device min alignment (%u)",
+						data, min_alignment);
+				rte_memcpy(data, seg->addr, seg->length);
+				bufs[i].length += seg->length;
+
+				ret = rte_pktmbuf_chain(m_head, m_tail);
+				TEST_ASSERT_SUCCESS(ret,
+						"Couldn't chain mbufs from %d data type mbuf pool",
+						op_type);
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+allocate_buffers_on_socket(struct rte_bbdev_op_data **buffers, const int len,
+		const int socket)
+{
+	int i;
+
+	*buffers = rte_zmalloc_socket(NULL, len, 0, socket);
+	if (*buffers == NULL) {
+		printf("WARNING: Failed to allocate op_data on socket %d\n",
+				socket);
+		/* try to allocate memory on other detected sockets */
+		for (i = 0; i < socket; i++) {
+			*buffers = rte_zmalloc_socket(NULL, len, 0, i);
+			if (*buffers != NULL)
+				break;
+		}
+	}
+
+	return (*buffers == NULL) ? TEST_FAILED : TEST_SUCCESS;
+}
+
+static int
+fill_queue_buffers(struct test_op_params *op_params,
+		struct rte_mempool *in_mp, struct rte_mempool *hard_out_mp,
+		struct rte_mempool *soft_out_mp, uint16_t queue_id,
+		uint16_t min_alignment, const int socket_id)
+{
+	int ret;
+	enum op_data_type type;
+	const uint16_t n = op_params->num_to_process;
+
+	struct rte_mempool *mbuf_pools[DATA_NUM_TYPES] = {
+		in_mp,
+		soft_out_mp,
+		hard_out_mp,
+	};
+
+	struct rte_bbdev_op_data **queue_ops[DATA_NUM_TYPES] = {
+		&op_params->q_bufs[socket_id][queue_id].inputs,
+		&op_params->q_bufs[socket_id][queue_id].soft_outputs,
+		&op_params->q_bufs[socket_id][queue_id].hard_outputs,
+	};
+
+	for (type = DATA_INPUT; type < DATA_NUM_TYPES; ++type) {
+		struct op_data_entries *ref_entries =
+				&test_vector.entries[type];
+		if (ref_entries->nb_segments == 0)
+			continue;
+
+		ret = allocate_buffers_on_socket(queue_ops[type],
+				n * sizeof(struct rte_bbdev_op_data),
+				socket_id);
+		TEST_ASSERT_SUCCESS(ret,
+				"Couldn't allocate memory for rte_bbdev_op_data structs");
+
+		ret = init_op_data_objs(*queue_ops[type], ref_entries,
+				mbuf_pools[type], n, type, min_alignment);
+		TEST_ASSERT_SUCCESS(ret,
+				"Couldn't init rte_bbdev_op_data structs");
+	}
+
+	return 0;
+}
+
+static void
+free_buffers(struct active_device *ad, struct test_op_params *op_params)
+{
+	unsigned int i, j;
+
+	rte_mempool_free(ad->ops_mempool);
+	rte_mempool_free(ad->in_mbuf_pool);
+	rte_mempool_free(ad->hard_out_mbuf_pool);
+	rte_mempool_free(ad->soft_out_mbuf_pool);
+
+	for (i = 0; i < rte_lcore_count(); ++i) {
+		for (j = 0; j < RTE_MAX_NUMA_NODES; ++j) {
+			rte_free(op_params->q_bufs[j][i].inputs);
+			rte_free(op_params->q_bufs[j][i].hard_outputs);
+			rte_free(op_params->q_bufs[j][i].soft_outputs);
+		}
+	}
+}
+
+static void
+copy_reference_dec_op(struct rte_bbdev_dec_op **ops, unsigned int n,
+		unsigned int start_idx,
+		struct rte_bbdev_op_data *inputs,
+		struct rte_bbdev_op_data *hard_outputs,
+		struct rte_bbdev_op_data *soft_outputs,
+		struct rte_bbdev_dec_op *ref_op)
+{
+	unsigned int i;
+	struct rte_bbdev_op_turbo_dec *turbo_dec = &ref_op->turbo_dec;
+
+	for (i = 0; i < n; ++i) {
+		if (turbo_dec->code_block_mode == 0) {
+			ops[i]->turbo_dec.tb_params.ea =
+					turbo_dec->tb_params.ea;
+			ops[i]->turbo_dec.tb_params.eb =
+					turbo_dec->tb_params.eb;
+			ops[i]->turbo_dec.tb_params.k_pos =
+					turbo_dec->tb_params.k_pos;
+			ops[i]->turbo_dec.tb_params.k_neg =
+					turbo_dec->tb_params.k_neg;
+			ops[i]->turbo_dec.tb_params.c =
+					turbo_dec->tb_params.c;
+			ops[i]->turbo_dec.tb_params.c_neg =
+					turbo_dec->tb_params.c_neg;
+			ops[i]->turbo_dec.tb_params.cab =
+					turbo_dec->tb_params.cab;
+		} else {
+			ops[i]->turbo_dec.cb_params.e = turbo_dec->cb_params.e;
+			ops[i]->turbo_dec.cb_params.k = turbo_dec->cb_params.k;
+		}
+
+		ops[i]->turbo_dec.ext_scale = turbo_dec->ext_scale;
+		ops[i]->turbo_dec.iter_max = turbo_dec->iter_max;
+		ops[i]->turbo_dec.iter_min = turbo_dec->iter_min;
+		ops[i]->turbo_dec.op_flags = turbo_dec->op_flags;
+		ops[i]->turbo_dec.rv_index = turbo_dec->rv_index;
+		ops[i]->turbo_dec.num_maps = turbo_dec->num_maps;
+		ops[i]->turbo_dec.code_block_mode = turbo_dec->code_block_mode;
+
+		ops[i]->turbo_dec.hard_output = hard_outputs[start_idx + i];
+		ops[i]->turbo_dec.input = inputs[start_idx + i];
+		if (soft_outputs != NULL)
+			ops[i]->turbo_dec.soft_output =
+				soft_outputs[start_idx + i];
+	}
+}
+
+static void
+copy_reference_enc_op(struct rte_bbdev_enc_op **ops, unsigned int n,
+		unsigned int start_idx,
+		struct rte_bbdev_op_data *inputs,
+		struct rte_bbdev_op_data *outputs,
+		struct rte_bbdev_enc_op *ref_op)
+{
+	unsigned int i;
+	struct rte_bbdev_op_turbo_enc *turbo_enc = &ref_op->turbo_enc;
+
+	for (i = 0; i < n; ++i) {
+		if (turbo_enc->code_block_mode == 0) {
+			ops[i]->turbo_enc.tb_params.ea =
+					turbo_enc->tb_params.ea;
+			ops[i]->turbo_enc.tb_params.eb =
+					turbo_enc->tb_params.eb;
+			ops[i]->turbo_enc.tb_params.k_pos =
+					turbo_enc->tb_params.k_pos;
+			ops[i]->turbo_enc.tb_params.k_neg =
+					turbo_enc->tb_params.k_neg;
+			ops[i]->turbo_enc.tb_params.c =
+					turbo_enc->tb_params.c;
+			ops[i]->turbo_enc.tb_params.c_neg =
+					turbo_enc->tb_params.c_neg;
+			ops[i]->turbo_enc.tb_params.cab =
+					turbo_enc->tb_params.cab;
+			ops[i]->turbo_enc.tb_params.ncb_pos =
+					turbo_enc->tb_params.ncb_pos;
+			ops[i]->turbo_enc.tb_params.ncb_neg =
+					turbo_enc->tb_params.ncb_neg;
+			ops[i]->turbo_enc.tb_params.r = turbo_enc->tb_params.r;
+		} else {
+			ops[i]->turbo_enc.cb_params.e = turbo_enc->cb_params.e;
+			ops[i]->turbo_enc.cb_params.k = turbo_enc->cb_params.k;
+			ops[i]->turbo_enc.cb_params.ncb =
+					turbo_enc->cb_params.ncb;
+		}
+		ops[i]->turbo_enc.rv_index = turbo_enc->rv_index;
+		ops[i]->turbo_enc.op_flags = turbo_enc->op_flags;
+		ops[i]->turbo_enc.code_block_mode = turbo_enc->code_block_mode;
+
+		ops[i]->turbo_enc.output = outputs[start_idx + i];
+		ops[i]->turbo_enc.input = inputs[start_idx + i];
+	}
+}
+
+static int
+check_dec_status_and_ordering(struct rte_bbdev_dec_op *op,
+		unsigned int order_idx, const int expected_status)
+{
+	TEST_ASSERT(op->status == expected_status,
+			"op_status (%d) != expected_status (%d)",
+			op->status, expected_status);
+
+	TEST_ASSERT((void *)(uintptr_t)order_idx == op->opaque_data,
+			"Ordering error, expected %p, got %p",
+			(void *)(uintptr_t)order_idx, op->opaque_data);
+
+	return TEST_SUCCESS;
+}
+
+static int
+check_enc_status_and_ordering(struct rte_bbdev_enc_op *op,
+		unsigned int order_idx, const int expected_status)
+{
+	TEST_ASSERT(op->status == expected_status,
+			"op_status (%d) != expected_status (%d)",
+			op->status, expected_status);
+
+	TEST_ASSERT((void *)(uintptr_t)order_idx == op->opaque_data,
+			"Ordering error, expected %p, got %p",
+			(void *)(uintptr_t)order_idx, op->opaque_data);
+
+	return TEST_SUCCESS;
+}
+
+static inline int
+validate_op_chain(struct rte_bbdev_op_data *op,
+		struct op_data_entries *orig_op)
+{
+	uint8_t i;
+	struct rte_mbuf *m = op->data;
+	uint8_t nb_dst_segments = orig_op->nb_segments;
+
+	TEST_ASSERT(nb_dst_segments == m->nb_segs,
+			"Number of segments differs between original (%u) and filled (%u) op",
+			nb_dst_segments, m->nb_segs);
+
+	for (i = 0; i < nb_dst_segments; ++i) {
+		/* Apply offset to the first mbuf segment */
+		uint16_t offset = (i == 0) ? op->offset : 0;
+		uint16_t data_len = m->data_len - offset;
+
+		TEST_ASSERT(orig_op->segments[i].length == data_len,
+				"Length of segment differs between original (%u) and filled (%u) op",
+				orig_op->segments[i].length, data_len);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(orig_op->segments[i].addr,
+				rte_pktmbuf_mtod_offset(m, uint32_t *, offset),
+				data_len,
+				"Output buffers (CB=%u) are not equal", i);
+		m = m->next;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+validate_dec_buffers(struct rte_bbdev_dec_op *ref_op, struct test_buffers *bufs,
+		const uint16_t num_to_process)
+{
+	int i;
+
+	struct op_data_entries *hard_data_orig =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+	struct op_data_entries *soft_data_orig =
+			&test_vector.entries[DATA_SOFT_OUTPUT];
+
+	for (i = 0; i < num_to_process; i++) {
+		TEST_ASSERT_SUCCESS(validate_op_chain(&bufs->hard_outputs[i],
+				hard_data_orig),
+				"Hard output buffers are not equal");
+		if (ref_op->turbo_dec.op_flags &
+				RTE_BBDEV_TURBO_SOFT_OUTPUT)
+			TEST_ASSERT_SUCCESS(validate_op_chain(
+					&bufs->soft_outputs[i],
+					soft_data_orig),
+					"Soft output buffers are not equal");
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+validate_enc_buffers(struct test_buffers *bufs, const uint16_t num_to_process)
+{
+	int i;
+
+	struct op_data_entries *hard_data_orig =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+
+	for (i = 0; i < num_to_process; i++)
+		TEST_ASSERT_SUCCESS(validate_op_chain(&bufs->hard_outputs[i],
+				hard_data_orig), "");
+
+	return TEST_SUCCESS;
+}
+
+static int
+validate_dec_op(struct rte_bbdev_dec_op **ops, const uint16_t n,
+		struct rte_bbdev_dec_op *ref_op, const int vector_mask)
+{
+	unsigned int i;
+	int ret;
+	struct op_data_entries *hard_data_orig =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+	struct op_data_entries *soft_data_orig =
+			&test_vector.entries[DATA_SOFT_OUTPUT];
+	struct rte_bbdev_op_turbo_dec *ops_td;
+	struct rte_bbdev_op_data *hard_output;
+	struct rte_bbdev_op_data *soft_output;
+	struct rte_bbdev_op_turbo_dec *ref_td = &ref_op->turbo_dec;
+
+	for (i = 0; i < n; ++i) {
+		ops_td = &ops[i]->turbo_dec;
+		hard_output = &ops_td->hard_output;
+		soft_output = &ops_td->soft_output;
+
+		if (vector_mask & TEST_BBDEV_VF_EXPECTED_ITER_COUNT)
+			TEST_ASSERT(ops_td->iter_count <= ref_td->iter_count,
+					"Returned iter_count (%d) > expected iter_count (%d)",
+					ops_td->iter_count, ref_td->iter_count);
+		ret = check_dec_status_and_ordering(ops[i], i, ref_op->status);
+		TEST_ASSERT_SUCCESS(ret,
+				"Checking status and ordering for decoder failed");
+
+		TEST_ASSERT_SUCCESS(validate_op_chain(hard_output,
+				hard_data_orig),
+				"Hard output buffers (CB=%u) are not equal",
+				i);
+
+		if (ref_op->turbo_dec.op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT)
+			TEST_ASSERT_SUCCESS(validate_op_chain(soft_output,
+					soft_data_orig),
+					"Soft output buffers (CB=%u) are not equal",
+					i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+validate_enc_op(struct rte_bbdev_enc_op **ops, const uint16_t n,
+		struct rte_bbdev_enc_op *ref_op)
+{
+	unsigned int i;
+	int ret;
+	struct op_data_entries *hard_data_orig =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+
+	for (i = 0; i < n; ++i) {
+		ret = check_enc_status_and_ordering(ops[i], i, ref_op->status);
+		TEST_ASSERT_SUCCESS(ret,
+				"Checking status and ordering for encoder failed");
+		TEST_ASSERT_SUCCESS(validate_op_chain(
+				&ops[i]->turbo_enc.output,
+				hard_data_orig),
+				"Output buffers (CB=%u) are not equal",
+				i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+create_reference_dec_op(struct rte_bbdev_dec_op *op)
+{
+	unsigned int i;
+	struct op_data_entries *entry;
+
+	op->turbo_dec = test_vector.turbo_dec;
+	entry = &test_vector.entries[DATA_INPUT];
+	for (i = 0; i < entry->nb_segments; ++i)
+		op->turbo_dec.input.length +=
+				entry->segments[i].length;
+}
+
+static void
+create_reference_enc_op(struct rte_bbdev_enc_op *op)
+{
+	unsigned int i;
+	struct op_data_entries *entry;
+
+	op->turbo_enc = test_vector.turbo_enc;
+	entry = &test_vector.entries[DATA_INPUT];
+	for (i = 0; i < entry->nb_segments; ++i)
+		op->turbo_enc.input.length +=
+				entry->segments[i].length;
+}
+
+static int
+init_test_op_params(struct test_op_params *op_params,
+		enum rte_bbdev_op_type op_type, const int expected_status,
+		const int vector_mask, struct rte_mempool *ops_mp,
+		uint16_t burst_sz, uint16_t num_to_process, uint16_t num_lcores)
+{
+	int ret = 0;
+
+	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+		ret = rte_bbdev_dec_op_alloc_bulk(ops_mp,
+				&op_params->ref_dec_op, 1);
+	else
+		ret = rte_bbdev_enc_op_alloc_bulk(ops_mp,
+				&op_params->ref_enc_op, 1);
+
+	TEST_ASSERT_SUCCESS(ret, "rte_bbdev_op_alloc_bulk() failed");
+
+	op_params->mp = ops_mp;
+	op_params->burst_sz = burst_sz;
+	op_params->num_to_process = num_to_process;
+	op_params->num_lcores = num_lcores;
+	op_params->vector_mask = vector_mask;
+	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+		op_params->ref_dec_op->status = expected_status;
+	else if (op_type == RTE_BBDEV_OP_TURBO_ENC)
+		op_params->ref_enc_op->status = expected_status;
+
+	return 0;
+}
+
+static int
+run_test_case_on_device(test_case_function *test_case_func, uint8_t dev_id,
+		struct test_op_params *op_params)
+{
+	int t_ret, f_ret, socket_id = SOCKET_ID_ANY;
+	unsigned int i;
+	struct active_device *ad;
+	unsigned int burst_sz = get_burst_sz();
+	enum rte_bbdev_op_type op_type = test_vector.op_type;
+
+	ad = &active_devs[dev_id];
+
+	/* Check if device supports op_type */
+	if (!is_avail_op(ad, test_vector.op_type))
+		return TEST_SUCCESS;
+
+	struct rte_bbdev_info info;
+	rte_bbdev_info_get(ad->dev_id, &info);
+	socket_id = GET_SOCKET(info.socket_id);
+
+	if (op_type == RTE_BBDEV_OP_NONE)
+		op_type = RTE_BBDEV_OP_TURBO_ENC;
+	f_ret = create_mempools(ad, socket_id, op_type,
+			get_num_ops());
+	if (f_ret != TEST_SUCCESS) {
+		printf("Couldn't create mempools");
+		goto fail;
+	}
+
+	f_ret = init_test_op_params(op_params, test_vector.op_type,
+			test_vector.expected_status,
+			test_vector.mask,
+			ad->ops_mempool,
+			burst_sz,
+			get_num_ops(),
+			get_num_lcores());
+	if (f_ret != TEST_SUCCESS) {
+		printf("Couldn't init test op params");
+		goto fail;
+	}
+
+	if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+		create_reference_dec_op(op_params->ref_dec_op);
+	else if (test_vector.op_type == RTE_BBDEV_OP_TURBO_ENC)
+		create_reference_enc_op(op_params->ref_enc_op);
+
+	for (i = 0; i < ad->nb_queues; ++i) {
+		f_ret = fill_queue_buffers(op_params,
+				ad->in_mbuf_pool,
+				ad->hard_out_mbuf_pool,
+				ad->soft_out_mbuf_pool,
+				ad->queue_ids[i],
+				info.drv.min_alignment,
+				socket_id);
+		if (f_ret != TEST_SUCCESS) {
+			printf("Couldn't init queue buffers");
+			goto fail;
+		}
+	}
+
+	/* Run test case function */
+	t_ret = test_case_func(ad, op_params);
+
+	/* Free active device resources and return */
+	free_buffers(ad, op_params);
+	return t_ret;
+
+fail:
+	free_buffers(ad, op_params);
+	return TEST_FAILED;
+}
+
+/* Run the given test case function on every active device. */
+static int
+run_test_case(test_case_function *test_case_func)
+{
+	int ret = 0;
+	uint8_t dev;
+
+	/* Alloc op_params */
+	struct test_op_params *op_params = rte_zmalloc(NULL,
+			sizeof(struct test_op_params), RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(op_params, "Failed to alloc %zuB for op_params",
+			RTE_ALIGN(sizeof(struct test_op_params),
+				RTE_CACHE_LINE_SIZE));
+
+	/* For each device run test case function */
+	for (dev = 0; dev < nb_active_devs; ++dev)
+		ret |= run_test_case_on_device(test_case_func, dev, op_params);
+
+	rte_free(op_params);
+
+	return ret;
+}
+
+static void
+dequeue_event_callback(uint16_t dev_id,
+		enum rte_bbdev_event_type event, void *cb_arg,
+		void *ret_param)
+{
+	int ret;
+	uint16_t i;
+	uint64_t total_time;
+	uint16_t deq, burst_sz, num_to_process;
+	uint16_t queue_id = INVALID_QUEUE_ID;
+	struct rte_bbdev_dec_op *dec_ops[MAX_BURST];
+	struct rte_bbdev_enc_op *enc_ops[MAX_BURST];
+	struct test_buffers *bufs;
+	struct rte_bbdev_info info;
+
+	/* Input length in bytes, million operations per second,
+	 * million bits per second.
+	 */
+	double in_len;
+
+	struct thread_params *tp = cb_arg;
+
+#ifdef RTE_LIBRTE_PMD_TIP
+	struct rte_tip_deq_intr_details *intr_det = ret_param;
+	queue_id = intr_det->queue_id;
+#else
+	RTE_SET_USED(ret_param);
+#endif
+
+	/* Find matching thread params using queue_id */
+	for (i = 0; i < MAX_QUEUES; ++i, ++tp)
+		if (tp->queue_id == queue_id)
+			break;
+
+	if (i == MAX_QUEUES) {
+		printf("%s: Queue_id from interrupt details was not found!\n",
+				__func__);
+		return;
+	}
+
+	if (unlikely(event != RTE_BBDEV_EVENT_DEQUEUE)) {
+		rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+		printf(
+			"Dequeue interrupt handler called for incorrect event!\n");
+		return;
+	}
+
+	burst_sz = tp->op_params->burst_sz;
+	num_to_process = tp->op_params->num_to_process;
+
+	if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+		deq = rte_bbdev_dequeue_dec_ops(dev_id, queue_id, dec_ops,
+				burst_sz);
+	else
+		deq = rte_bbdev_dequeue_enc_ops(dev_id, queue_id, enc_ops,
+				burst_sz);
+
+	if (deq < burst_sz) {
+		printf(
+			"After receiving the interrupt all operations should be dequeued. Expected: %u, got: %u\n",
+			burst_sz, deq);
+		rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+		return;
+	}
+
+	if (rte_atomic16_read(&tp->nb_dequeued) + deq < num_to_process) {
+		rte_atomic16_add(&tp->nb_dequeued, deq);
+		return;
+	}
+
+	total_time = rte_rdtsc_precise() - tp->start_time;
+
+	rte_bbdev_info_get(dev_id, &info);
+
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	ret = TEST_SUCCESS;
+	if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+		ret = validate_dec_buffers(tp->op_params->ref_dec_op, bufs,
+				num_to_process);
+	else if (test_vector.op_type == RTE_BBDEV_OP_TURBO_ENC)
+		ret = validate_enc_buffers(bufs, num_to_process);
+
+	if (ret) {
+		printf("Buffers validation failed\n");
+		rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+	}
+
+	switch (test_vector.op_type) {
+	case RTE_BBDEV_OP_TURBO_DEC:
+		in_len = tp->op_params->ref_dec_op->turbo_dec.input.length;
+		break;
+	case RTE_BBDEV_OP_TURBO_ENC:
+		in_len = tp->op_params->ref_enc_op->turbo_enc.input.length;
+		break;
+	case RTE_BBDEV_OP_NONE:
+		in_len = 0.0;
+		break;
+	default:
+		printf("Unknown op type: %d\n", test_vector.op_type);
+		rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+		return;
+	}
+
+	tp->mops = ((double)num_to_process / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+	tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+
+	rte_atomic16_add(&tp->nb_dequeued, deq);
+}
+
+static int
+throughput_intr_lcore_dec(void *arg)
+{
+	struct thread_params *tp = arg;
+	unsigned int enqueued;
+	struct rte_bbdev_dec_op *ops[MAX_BURST];
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_to_process = tp->op_params->num_to_process;
+	struct test_buffers *bufs = NULL;
+	unsigned int allocs_failed = 0;
+	struct rte_bbdev_info info;
+	int ret;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_intr_enable(tp->dev_id, queue_id),
+			"Failed to enable interrupts for dev: %u, queue_id: %u",
+			tp->dev_id, queue_id);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	rte_atomic16_clear(&tp->processing_status);
+	rte_atomic16_clear(&tp->nb_dequeued);
+
+	while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+		rte_pause();
+
+	tp->start_time = rte_rdtsc_precise();
+	for (enqueued = 0; enqueued < num_to_process;) {
+
+		uint16_t num_to_enq = burst_sz;
+
+		if (unlikely(num_to_process - enqueued < num_to_enq))
+			num_to_enq = num_to_process - enqueued;
+
+		ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops,
+				num_to_enq);
+		if (ret != 0) {
+			allocs_failed++;
+			continue;
+		}
+
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_dec_op(ops, num_to_enq, enqueued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					bufs->soft_outputs,
+					tp->op_params->ref_dec_op);
+
+		enqueued += rte_bbdev_enqueue_dec_ops(tp->dev_id, queue_id, ops,
+				num_to_enq);
+
+		rte_bbdev_dec_op_free_bulk(ops, num_to_enq);
+	}
+
+	if (allocs_failed > 0)
+		printf("WARNING: op allocations failed: %u times\n",
+				allocs_failed);
+
+	return TEST_SUCCESS;
+}
+
+static int
+throughput_intr_lcore_enc(void *arg)
+{
+	struct thread_params *tp = arg;
+	unsigned int enqueued;
+	struct rte_bbdev_enc_op *ops[MAX_BURST];
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_to_process = tp->op_params->num_to_process;
+	struct test_buffers *bufs = NULL;
+	unsigned int allocs_failed = 0;
+	struct rte_bbdev_info info;
+	int ret;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_intr_enable(tp->dev_id, queue_id),
+			"Failed to enable interrupts for dev: %u, queue_id: %u",
+			tp->dev_id, queue_id);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	rte_atomic16_clear(&tp->processing_status);
+	rte_atomic16_clear(&tp->nb_dequeued);
+
+	while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+		rte_pause();
+
+	tp->start_time = rte_rdtsc_precise();
+	for (enqueued = 0; enqueued < num_to_process;) {
+
+		uint16_t num_to_enq = burst_sz;
+
+		if (unlikely(num_to_process - enqueued < num_to_enq))
+			num_to_enq = num_to_process - enqueued;
+
+		ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops,
+				num_to_enq);
+		if (ret != 0) {
+			allocs_failed++;
+			continue;
+		}
+
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_enc_op(ops, num_to_enq, enqueued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					tp->op_params->ref_enc_op);
+
+		enqueued += rte_bbdev_enqueue_enc_ops(tp->dev_id, queue_id, ops,
+				num_to_enq);
+
+		rte_bbdev_enc_op_free_bulk(ops, num_to_enq);
+	}
+
+	if (allocs_failed > 0)
+		printf("WARNING: op allocations failed: %u times\n",
+				allocs_failed);
+
+	return TEST_SUCCESS;
+}
+
+static int
+throughput_pmd_lcore_dec(void *arg)
+{
+	struct thread_params *tp = arg;
+	unsigned int enqueued, dequeued;
+	struct rte_bbdev_dec_op *ops[MAX_BURST];
+	uint64_t total_time, start_time;
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_to_process = tp->op_params->num_to_process;
+	struct rte_bbdev_dec_op *ref_op = tp->op_params->ref_dec_op;
+	struct test_buffers *bufs = NULL;
+	unsigned int allocs_failed = 0;
+	int ret;
+	struct rte_bbdev_info info;
+
+	/* Input length in bytes, million operations per second, million bits
+	 * per second.
+	 */
+	double in_len;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+		rte_pause();
+
+	start_time = rte_rdtsc_precise();
+	for (enqueued = 0, dequeued = 0; dequeued < num_to_process;) {
+		uint16_t deq;
+
+		if (likely(enqueued < num_to_process)) {
+
+			uint16_t num_to_enq = burst_sz;
+
+			if (unlikely(num_to_process - enqueued < num_to_enq))
+				num_to_enq = num_to_process - enqueued;
+
+			ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp,
+					ops, num_to_enq);
+			if (ret != 0) {
+				allocs_failed++;
+				goto do_dequeue;
+			}
+
+			if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+				copy_reference_dec_op(ops, num_to_enq, enqueued,
+						bufs->inputs,
+						bufs->hard_outputs,
+						bufs->soft_outputs,
+						ref_op);
+
+			enqueued += rte_bbdev_enqueue_dec_ops(tp->dev_id,
+					queue_id, ops, num_to_enq);
+		}
+do_dequeue:
+		deq = rte_bbdev_dequeue_dec_ops(tp->dev_id, queue_id, ops,
+				burst_sz);
+		dequeued += deq;
+		rte_bbdev_dec_op_free_bulk(ops, deq);
+	}
+	total_time = rte_rdtsc_precise() - start_time;
+
+	if (allocs_failed > 0)
+		printf("WARNING: op allocations failed: %u times\n",
+				allocs_failed);
+
+	TEST_ASSERT(enqueued == dequeued, "enqueued (%u) != dequeued (%u)",
+			enqueued, dequeued);
+
+	if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+		ret = validate_dec_buffers(ref_op, bufs, num_to_process);
+		TEST_ASSERT_SUCCESS(ret, "Buffers validation failed");
+	}
+
+	in_len = ref_op->turbo_dec.input.length;
+	tp->mops = ((double)num_to_process / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+	tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
+static int
+throughput_pmd_lcore_enc(void *arg)
+{
+	struct thread_params *tp = arg;
+	unsigned int enqueued, dequeued;
+	struct rte_bbdev_enc_op *ops[MAX_BURST];
+	uint64_t total_time, start_time;
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_to_process = tp->op_params->num_to_process;
+	struct rte_bbdev_enc_op *ref_op = tp->op_params->ref_enc_op;
+	struct test_buffers *bufs = NULL;
+	unsigned int allocs_failed = 0;
+	int ret;
+	struct rte_bbdev_info info;
+
+	/* Input length in bytes, million operations per second, million bits
+	 * per second.
+	 */
+	double in_len;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+		rte_pause();
+
+	start_time = rte_rdtsc_precise();
+	for (enqueued = 0, dequeued = 0; dequeued < num_to_process;) {
+		uint16_t deq;
+
+		if (likely(enqueued < num_to_process)) {
+
+			uint16_t num_to_enq = burst_sz;
+
+			if (unlikely(num_to_process - enqueued < num_to_enq))
+				num_to_enq = num_to_process - enqueued;
+
+			ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp,
+					ops, num_to_enq);
+			if (ret != 0) {
+				allocs_failed++;
+				goto do_dequeue;
+			}
+
+			if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+				copy_reference_enc_op(ops, num_to_enq, enqueued,
+						bufs->inputs,
+						bufs->hard_outputs,
+						ref_op);
+
+			enqueued += rte_bbdev_enqueue_enc_ops(tp->dev_id,
+					queue_id, ops, num_to_enq);
+		}
+do_dequeue:
+		deq = rte_bbdev_dequeue_enc_ops(tp->dev_id, queue_id, ops,
+				burst_sz);
+		dequeued += deq;
+		rte_bbdev_enc_op_free_bulk(ops, deq);
+	}
+	total_time = rte_rdtsc_precise() - start_time;
+
+	if (allocs_failed > 0)
+		printf("WARNING: op allocations failed: %u times\n",
+				allocs_failed);
+
+	TEST_ASSERT(enqueued == dequeued, "enqueued (%u) != dequeued (%u)",
+			enqueued, dequeued);
+
+	if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+		ret = validate_enc_buffers(bufs, num_to_process);
+		TEST_ASSERT_SUCCESS(ret, "Buffers validation failed");
+	}
+
+	in_len = ref_op->turbo_enc.input.length;
+
+	tp->mops = ((double)num_to_process / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+	tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+			((double)total_time / (double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
+static void
+print_throughput(struct thread_params *t_params, unsigned int used_cores)
+{
+	unsigned int lcore_id, iter = 0;
+	double total_mops = 0, total_mbps = 0;
+
+	RTE_LCORE_FOREACH(lcore_id) {
+		if (iter++ >= used_cores)
+			break;
+		printf("\tlcore_id: %u, throughput: %.8lg MOPS, %.8lg Mbps\n",
+				lcore_id, t_params[lcore_id].mops,
+				t_params[lcore_id].mbps);
+		total_mops += t_params[lcore_id].mops;
+		total_mbps += t_params[lcore_id].mbps;
+	}
+	printf(
+		"\n\tTotal stats for %u cores: throughput: %.8lg MOPS, %.8lg Mbps\n",
+		used_cores, total_mops, total_mbps);
+}
+
+/*
+ * Test function that measures the throughput of enqueue and dequeue
+ * operations on all available lcores.
+ */
+static int
+throughput_test(struct active_device *ad,
+		struct test_op_params *op_params)
+{
+	int ret;
+	unsigned int lcore_id, used_cores = 0;
+	struct thread_params t_params[MAX_QUEUES];
+	struct rte_bbdev_info info;
+	lcore_function_t *throughput_function;
+	struct thread_params *tp;
+	uint16_t num_lcores;
+
+	rte_bbdev_info_get(ad->dev_id, &info);
+
+	printf(
+		"Throughput test: dev: %s, nb_queues: %u, burst size: %u, num ops: %u, num_lcores: %u, op type: %s, int mode: %s, GHz: %lg\n",
+			info.dev_name, ad->nb_queues, op_params->burst_sz,
+			op_params->num_to_process, op_params->num_lcores,
+			rte_bbdev_op_type_str(test_vector.op_type),
+			intr_enabled ? "Interrupt mode" : "PMD mode",
+			(double)rte_get_tsc_hz() / 1000000000.0);
+
+	/* Set number of lcores */
+	num_lcores = (ad->nb_queues < (op_params->num_lcores))
+			? ad->nb_queues
+			: op_params->num_lcores;
+
+	if (intr_enabled) {
+		if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+			throughput_function = throughput_intr_lcore_dec;
+		else
+			throughput_function = throughput_intr_lcore_enc;
+
+		/* Dequeue interrupt callback registration */
+		rte_bbdev_callback_register(ad->dev_id, RTE_BBDEV_EVENT_DEQUEUE,
+				dequeue_event_callback,
+				&t_params);
+	} else {
+		if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+			throughput_function = throughput_pmd_lcore_dec;
+		else
+			throughput_function = throughput_pmd_lcore_enc;
+	}
+
+	rte_atomic16_set(&op_params->sync, SYNC_WAIT);
+
+	t_params[rte_lcore_id()].dev_id = ad->dev_id;
+	t_params[rte_lcore_id()].op_params = op_params;
+	t_params[rte_lcore_id()].queue_id =
+			ad->queue_ids[used_cores++];
+
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (used_cores >= num_lcores)
+			break;
+
+		t_params[lcore_id].dev_id = ad->dev_id;
+		t_params[lcore_id].op_params = op_params;
+		t_params[lcore_id].queue_id = ad->queue_ids[used_cores++];
+
+		rte_eal_remote_launch(throughput_function, &t_params[lcore_id],
+				lcore_id);
+	}
+
+	rte_atomic16_set(&op_params->sync, SYNC_START);
+	ret = throughput_function(&t_params[rte_lcore_id()]);
+
+	/* Master core is always used */
+	used_cores = 1;
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (used_cores++ >= num_lcores)
+			break;
+
+		ret |= rte_eal_wait_lcore(lcore_id);
+	}
+
+	/* Return if test failed */
+	if (ret)
+		return ret;
+
+	/* Print throughput if interrupts are disabled and test passed */
+	if (!intr_enabled) {
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			print_throughput(t_params, num_lcores);
+		return ret;
+	}
+
+	/* In interrupt TC we need to wait for the interrupt callback to dequeue
+	 * all pending operations. Skip waiting for queues which reported an
+	 * error using processing_status variable.
+	 * Wait for master lcore operations.
+	 */
+	tp = &t_params[rte_lcore_id()];
+	while ((rte_atomic16_read(&tp->nb_dequeued) <
+			op_params->num_to_process) &&
+			(rte_atomic16_read(&tp->processing_status) !=
+			TEST_FAILED))
+		rte_pause();
+
+	ret |= rte_atomic16_read(&tp->processing_status);
+
+	/* Wait for slave lcores operations */
+	used_cores = 1;
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		tp = &t_params[lcore_id];
+		if (used_cores++ >= num_lcores)
+			break;
+
+		while ((rte_atomic16_read(&tp->nb_dequeued) <
+				op_params->num_to_process) &&
+				(rte_atomic16_read(&tp->processing_status) !=
+				TEST_FAILED))
+			rte_pause();
+
+		ret |= rte_atomic16_read(&tp->processing_status);
+	}
+
+	/* Print throughput if test passed */
+	if (!ret && test_vector.op_type != RTE_BBDEV_OP_NONE)
+		print_throughput(t_params, num_lcores);
+
+	return ret;
+}
+
+static int
+operation_latency_test_dec(struct rte_mempool *mempool,
+		struct test_buffers *bufs, struct rte_bbdev_dec_op *ref_op,
+		int vector_mask, uint16_t dev_id, uint16_t queue_id,
+		const uint16_t num_to_process, uint16_t burst_sz,
+		uint64_t *total_time)
+{
+	int ret = TEST_SUCCESS;
+	uint16_t i, j, dequeued;
+	struct rte_bbdev_dec_op *ops[MAX_BURST];
+	uint64_t start_time = 0;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+		bool first_time = true;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		rte_bbdev_dec_op_alloc_bulk(mempool, ops, burst_sz);
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_dec_op(ops, burst_sz, dequeued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					bufs->soft_outputs,
+					ref_op);
+
+		/* Set counter to validate the ordering */
+		for (j = 0; j < burst_sz; ++j)
+			ops[j]->opaque_data = (void *)(uintptr_t)j;
+
+		start_time = rte_rdtsc_precise();
+
+		enq = rte_bbdev_enqueue_dec_ops(dev_id, queue_id, &ops[enq],
+				burst_sz);
+		TEST_ASSERT(enq == burst_sz,
+				"Error enqueueing burst, expected %u, got %u",
+				burst_sz, enq);
+
+		/* Dequeue */
+		do {
+			deq += rte_bbdev_dequeue_dec_ops(dev_id, queue_id,
+					&ops[deq], burst_sz - deq);
+			if (likely(first_time && (deq > 0))) {
+				*total_time += rte_rdtsc_precise() - start_time;
+				first_time = false;
+			}
+		} while (unlikely(burst_sz != deq));
+
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+			ret = validate_dec_op(ops, burst_sz, ref_op,
+					vector_mask);
+			TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+		}
+
+		rte_bbdev_dec_op_free_bulk(ops, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
+static int
+operation_latency_test_enc(struct rte_mempool *mempool,
+		struct test_buffers *bufs, struct rte_bbdev_enc_op *ref_op,
+		uint16_t dev_id, uint16_t queue_id,
+		const uint16_t num_to_process, uint16_t burst_sz,
+		uint64_t *total_time)
+{
+	int ret = TEST_SUCCESS;
+	uint16_t i, j, dequeued;
+	struct rte_bbdev_enc_op *ops[MAX_BURST];
+	uint64_t start_time = 0;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+		bool first_time = true;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		rte_bbdev_enc_op_alloc_bulk(mempool, ops, burst_sz);
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_enc_op(ops, burst_sz, dequeued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					ref_op);
+
+		/* Set counter to validate the ordering */
+		for (j = 0; j < burst_sz; ++j)
+			ops[j]->opaque_data = (void *)(uintptr_t)j;
+
+		start_time = rte_rdtsc_precise();
+
+		enq = rte_bbdev_enqueue_enc_ops(dev_id, queue_id, &ops[enq],
+				burst_sz);
+		TEST_ASSERT(enq == burst_sz,
+				"Error enqueueing burst, expected %u, got %u",
+				burst_sz, enq);
+
+		/* Dequeue */
+		do {
+			deq += rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
+					&ops[deq], burst_sz - deq);
+			if (likely(first_time && (deq > 0))) {
+				*total_time += rte_rdtsc_precise() - start_time;
+				first_time = false;
+			}
+		} while (unlikely(burst_sz != deq));
+
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+			ret = validate_enc_op(ops, burst_sz, ref_op);
+			TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+		}
+
+		rte_bbdev_enc_op_free_bulk(ops, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
+static int
+operation_latency_test(struct active_device *ad,
+		struct test_op_params *op_params)
+{
+	int iter;
+	uint16_t burst_sz = op_params->burst_sz;
+	const uint16_t num_to_process = op_params->num_to_process;
+	const enum rte_bbdev_op_type op_type = test_vector.op_type;
+	const uint16_t queue_id = ad->queue_ids[0];
+	struct test_buffers *bufs = NULL;
+	struct rte_bbdev_info info;
+	uint64_t total_time = 0;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(ad->dev_id, &info);
+	bufs = &op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	printf(
+		"Validation/Latency test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+			info.dev_name, burst_sz, num_to_process,
+			rte_bbdev_op_type_str(op_type));
+
+	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+		iter = operation_latency_test_dec(op_params->mp, bufs,
+				op_params->ref_dec_op, op_params->vector_mask,
+				ad->dev_id, queue_id, num_to_process,
+				burst_sz, &total_time);
+	else
+		iter = operation_latency_test_enc(op_params->mp, bufs,
+				op_params->ref_enc_op, ad->dev_id, queue_id,
+				num_to_process, burst_sz, &total_time);
+
+	if (iter < 0)
+		return TEST_FAILED;
+
+	printf("\toperation avg. latency: %lg cycles, %lg us\n",
+			(double)total_time / (double)iter,
+			(double)(total_time * 1000000) / (double)iter /
+			(double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
+static int
+offload_latency_test_dec(struct rte_mempool *mempool, struct test_buffers *bufs,
+		struct rte_bbdev_dec_op *ref_op, uint16_t dev_id,
+		uint16_t queue_id, const uint16_t num_to_process,
+		uint16_t burst_sz, uint64_t *enq_total_time,
+		uint64_t *deq_total_time)
+{
+	int i, dequeued;
+	struct rte_bbdev_dec_op *ops[MAX_BURST];
+	uint64_t enq_start_time, deq_start_time;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		rte_bbdev_dec_op_alloc_bulk(mempool, ops, burst_sz);
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_dec_op(ops, burst_sz, dequeued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					bufs->soft_outputs,
+					ref_op);
+
+		/* Start time measurement for enqueue function offload latency */
+		enq_start_time = rte_rdtsc();
+		do {
+			enq += rte_bbdev_enqueue_dec_ops(dev_id, queue_id,
+					&ops[enq], burst_sz - enq);
+		} while (unlikely(burst_sz != enq));
+		*enq_total_time += rte_rdtsc() - enq_start_time;
+
+		/* ensure enqueue has been completed */
+		rte_delay_ms(10);
+
+		/* Start time measurement for dequeue function offload latency */
+		deq_start_time = rte_rdtsc();
+		do {
+			deq += rte_bbdev_dequeue_dec_ops(dev_id, queue_id,
+					&ops[deq], burst_sz - deq);
+		} while (unlikely(burst_sz != deq));
+		*deq_total_time += rte_rdtsc() - deq_start_time;
+
+		rte_bbdev_dec_op_free_bulk(ops, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
+static int
+offload_latency_test_enc(struct rte_mempool *mempool, struct test_buffers *bufs,
+		struct rte_bbdev_enc_op *ref_op, uint16_t dev_id,
+		uint16_t queue_id, const uint16_t num_to_process,
+		uint16_t burst_sz, uint64_t *enq_total_time,
+		uint64_t *deq_total_time)
+{
+	int i, dequeued;
+	struct rte_bbdev_enc_op *ops[MAX_BURST];
+	uint64_t enq_start_time, deq_start_time;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		rte_bbdev_enc_op_alloc_bulk(mempool, ops, burst_sz);
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_enc_op(ops, burst_sz, dequeued,
+					bufs->inputs,
+					bufs->hard_outputs,
+					ref_op);
+
+		/* Start time measurement for enqueue function offload latency */
+		enq_start_time = rte_rdtsc();
+		do {
+			enq += rte_bbdev_enqueue_enc_ops(dev_id, queue_id,
+					&ops[enq], burst_sz - enq);
+		} while (unlikely(burst_sz != enq));
+		*enq_total_time += rte_rdtsc() - enq_start_time;
+
+		/* ensure enqueue has been completed */
+		rte_delay_ms(10);
+
+		/* Start time measurement for dequeue function offload latency */
+		deq_start_time = rte_rdtsc();
+		do {
+			deq += rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
+					&ops[deq], burst_sz - deq);
+		} while (unlikely(burst_sz != deq));
+		*deq_total_time += rte_rdtsc() - deq_start_time;
+
+		rte_bbdev_enc_op_free_bulk(ops, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
+static int
+offload_latency_test(struct active_device *ad,
+		struct test_op_params *op_params)
+{
+	int iter;
+	uint64_t enq_total_time = 0, deq_total_time = 0;
+	uint16_t burst_sz = op_params->burst_sz;
+	const uint16_t num_to_process = op_params->num_to_process;
+	const enum rte_bbdev_op_type op_type = test_vector.op_type;
+	const uint16_t queue_id = ad->queue_ids[0];
+	struct test_buffers *bufs = NULL;
+	struct rte_bbdev_info info;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(ad->dev_id, &info);
+	bufs = &op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	printf(
+		"Offload latency test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+			info.dev_name, burst_sz, num_to_process,
+			rte_bbdev_op_type_str(op_type));
+
+	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+		iter = offload_latency_test_dec(op_params->mp, bufs,
+				op_params->ref_dec_op, ad->dev_id, queue_id,
+				num_to_process, burst_sz, &enq_total_time,
+				&deq_total_time);
+	else
+		iter = offload_latency_test_enc(op_params->mp, bufs,
+				op_params->ref_enc_op, ad->dev_id, queue_id,
+				num_to_process, burst_sz, &enq_total_time,
+				&deq_total_time);
+
+	if (iter < 0)
+		return TEST_FAILED;
+
+	printf("\tenq offload avg. latency: %lg cycles, %lg us\n",
+			(double)enq_total_time / (double)iter,
+			(double)(enq_total_time * 1000000) / (double)iter /
+			(double)rte_get_tsc_hz());
+
+	printf("\tdeq offload avg. latency: %lg cycles, %lg us\n",
+			(double)deq_total_time / (double)iter,
+			(double)(deq_total_time * 1000000) / (double)iter /
+			(double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
+static int
+offload_latency_empty_q_test_dec(uint16_t dev_id, uint16_t queue_id,
+		const uint16_t num_to_process, uint16_t burst_sz,
+		uint64_t *deq_total_time)
+{
+	int i, deq_total;
+	struct rte_bbdev_dec_op *ops[MAX_BURST];
+	uint64_t deq_start_time;
+
+	/* Test deq offload latency from an empty queue */
+	deq_start_time = rte_rdtsc_precise();
+	for (i = 0, deq_total = 0; deq_total < num_to_process;
+			++i, deq_total += burst_sz) {
+		if (unlikely(num_to_process - deq_total < burst_sz))
+			burst_sz = num_to_process - deq_total;
+		rte_bbdev_dequeue_dec_ops(dev_id, queue_id, ops, burst_sz);
+	}
+	*deq_total_time = rte_rdtsc_precise() - deq_start_time;
+
+	return i;
+}
+
+static int
+offload_latency_empty_q_test_enc(uint16_t dev_id, uint16_t queue_id,
+		const uint16_t num_to_process, uint16_t burst_sz,
+		uint64_t *deq_total_time)
+{
+	int i, deq_total;
+	struct rte_bbdev_enc_op *ops[MAX_BURST];
+	uint64_t deq_start_time;
+
+	/* Test deq offload latency from an empty queue */
+	deq_start_time = rte_rdtsc_precise();
+	for (i = 0, deq_total = 0; deq_total < num_to_process;
+			++i, deq_total += burst_sz) {
+		if (unlikely(num_to_process - deq_total < burst_sz))
+			burst_sz = num_to_process - deq_total;
+		rte_bbdev_dequeue_enc_ops(dev_id, queue_id, ops, burst_sz);
+	}
+	*deq_total_time = rte_rdtsc_precise() - deq_start_time;
+
+	return i;
+}
+
+static int
+offload_latency_empty_q_test(struct active_device *ad,
+		struct test_op_params *op_params)
+{
+	int iter;
+	uint64_t deq_total_time = 0;
+	uint16_t burst_sz = op_params->burst_sz;
+	const uint16_t num_to_process = op_params->num_to_process;
+	const enum rte_bbdev_op_type op_type = test_vector.op_type;
+	const uint16_t queue_id = ad->queue_ids[0];
+	struct rte_bbdev_info info;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+			"BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(ad->dev_id, &info);
+
+	printf(
+		"Offload latency empty dequeue test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+			info.dev_name, burst_sz, num_to_process,
+			rte_bbdev_op_type_str(op_type));
+
+	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+		iter = offload_latency_empty_q_test_dec(ad->dev_id, queue_id,
+				num_to_process, burst_sz, &deq_total_time);
+	else
+		iter = offload_latency_empty_q_test_enc(ad->dev_id, queue_id,
+				num_to_process, burst_sz, &deq_total_time);
+
+	if (iter < 0)
+		return TEST_FAILED;
+
+	printf("\tempty deq offload avg. latency: %lg cycles, %lg us\n",
+			(double)deq_total_time / (double)iter,
+			(double)(deq_total_time * 1000000) / (double)iter /
+			(double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
+static int
+throughput_tc(void)
+{
+	return run_test_case(throughput_test);
+}
+
+static int
+offload_latency_tc(void)
+{
+	return run_test_case(offload_latency_test);
+}
+
+static int
+offload_latency_empty_q_tc(void)
+{
+	return run_test_case(offload_latency_empty_q_test);
+}
+
+static int
+operation_latency_tc(void)
+{
+	return run_test_case(operation_latency_test);
+}
+
+static int
+interrupt_tc(void)
+{
+	return run_test_case(throughput_test);
+}
+
+static struct unit_test_suite bbdev_throughput_testsuite = {
+	.suite_name = "BBdev Throughput Tests",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, throughput_tc),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite bbdev_validation_testsuite = {
+	.suite_name = "BBdev Validation Tests",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, operation_latency_tc),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite bbdev_latency_testsuite = {
+	.suite_name = "BBdev Latency Tests",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, offload_latency_tc),
+		TEST_CASE_ST(ut_setup, ut_teardown, offload_latency_empty_q_tc),
+		TEST_CASE_ST(ut_setup, ut_teardown, operation_latency_tc),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite bbdev_interrupt_testsuite = {
+	.suite_name = "BBdev Interrupt Tests",
+	.setup = interrupt_testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, interrupt_tc),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+REGISTER_TEST_COMMAND(throughput, bbdev_throughput_testsuite);
+REGISTER_TEST_COMMAND(validation, bbdev_validation_testsuite);
+REGISTER_TEST_COMMAND(latency, bbdev_latency_testsuite);
+REGISTER_TEST_COMMAND(interrupt, bbdev_interrupt_testsuite);
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
new file mode 100644
index 0000000..8d8596b
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -0,0 +1,963 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifdef RTE_EXEC_ENV_BSDAPP
+	#define _WITH_GETLINE
+#endif
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_malloc.h>
+
+#include "test_bbdev_vector.h"
+
+#define VALUE_DELIMITER ","
+#define ENTRY_DELIMITER "="
+
+const char *op_data_prefixes[] = {
+	"input",
+	"soft_output",
+	"hard_output",
+};
+
+/* trim leading and trailing spaces */
+static void
+trim_space(char *str)
+{
+	char *start, *end;
+
+	for (start = str; *start; start++) {
+		if (!isspace((unsigned char) start[0]))
+			break;
+	}
+
+	for (end = start + strlen(start); end > start + 1; end--) {
+		if (!isspace((unsigned char) end[-1]))
+			break;
+	}
+
+	*end = 0;
+
+	/* Shift from "start" to the beginning of the string */
+	if (start > str)
+		memmove(str, start, (end - start) + 1);
+}
+
+static bool
+starts_with(const char *str, const char *pre)
+{
+	return strncmp(pre, str, strlen(pre)) == 0;
+}
+
+/* tokenize test values separated by a comma */
+static int
+parse_values(char *tokens, uint32_t **data, uint32_t *data_length)
+{
+	uint32_t n_tokens = 0;
+	uint32_t data_size = 32;
+
+	uint32_t *values, *values_resized;
+	char *tok, *error = NULL;
+
+	tok = strtok(tokens, VALUE_DELIMITER);
+	if (tok == NULL)
+		return -1;
+
+	values = (uint32_t *)
+			rte_zmalloc(NULL, sizeof(uint32_t) * data_size, 0);
+	if (values == NULL)
+		return -1;
+
+	while (tok != NULL) {
+		values_resized = NULL;
+
+		if (n_tokens >= data_size) {
+			data_size *= 2;
+
+			values_resized = (uint32_t *) rte_realloc(values,
+				sizeof(uint32_t) * data_size, 0);
+			if (values_resized == NULL) {
+				rte_free(values);
+				return -1;
+			}
+			values = values_resized;
+		}
+
+		values[n_tokens] = (uint32_t) strtoul(tok, &error, 0);
+		if ((error == NULL) || (*error != '\0')) {
+			printf("Failed to convert '%s'\n", tok);
+			rte_free(values);
+			return -1;
+		}
+
+		*data_length = *data_length + (strlen(tok) - strlen("0x"))/2;
+
+		tok = strtok(NULL, VALUE_DELIMITER);
+		if (tok == NULL)
+			break;
+
+		n_tokens++;
+	}
+
+	values_resized = (uint32_t *) rte_realloc(values,
+		sizeof(uint32_t) * (n_tokens + 1), 0);
+
+	if (values_resized == NULL) {
+		rte_free(values);
+		return -1;
+	}
+
+	*data = values_resized;
+
+	return 0;
+}
+
+/* convert turbo decoder flag from string to unsigned long int */
+static int
+op_decoder_flag_strtoul(char *token, uint32_t *op_flag_value)
+{
+	if (!strcmp(token, "RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE"))
+		*op_flag_value = RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_TYPE_24B"))
+		*op_flag_value = RTE_BBDEV_TURBO_CRC_TYPE_24B;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_EQUALIZER"))
+		*op_flag_value = RTE_BBDEV_TURBO_EQUALIZER;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_SOFT_OUT_SATURATE"))
+		*op_flag_value = RTE_BBDEV_TURBO_SOFT_OUT_SATURATE;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_HALF_ITERATION_EVEN"))
+		*op_flag_value = RTE_BBDEV_TURBO_HALF_ITERATION_EVEN;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH"))
+		*op_flag_value = RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_SOFT_OUTPUT"))
+		*op_flag_value = RTE_BBDEV_TURBO_SOFT_OUTPUT;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_EARLY_TERMINATION"))
+		*op_flag_value = RTE_BBDEV_TURBO_EARLY_TERMINATION;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN"))
+		*op_flag_value = RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN"))
+		*op_flag_value = RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT"))
+		*op_flag_value = RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT"))
+		*op_flag_value = RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_MAP_DEC"))
+		*op_flag_value = RTE_BBDEV_TURBO_MAP_DEC;
+	else {
+		printf("The given value is not a turbo decoder flag\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/* convert turbo encoder flag from string to unsigned long int */
+static int
+op_encoder_flag_strtoul(char *token, uint32_t *op_flag_value)
+{
+	if (!strcmp(token, "RTE_BBDEV_TURBO_RV_INDEX_BYPASS"))
+		*op_flag_value = RTE_BBDEV_TURBO_RV_INDEX_BYPASS;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_RATE_MATCH"))
+		*op_flag_value = RTE_BBDEV_TURBO_RATE_MATCH;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_24B_ATTACH"))
+		*op_flag_value = RTE_BBDEV_TURBO_CRC_24B_ATTACH;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_24A_ATTACH"))
+		*op_flag_value = RTE_BBDEV_TURBO_CRC_24A_ATTACH;
+	else if (!strcmp(token, "RTE_BBDEV_TURBO_ENC_SCATTER_GATHER"))
+		*op_flag_value = RTE_BBDEV_TURBO_ENC_SCATTER_GATHER;
+	else {
+		printf("The given value is not a turbo encoder flag\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/* tokenize turbo decoder/encoder flag values separated by a comma */
+static int
+parse_turbo_flags(char *tokens, uint32_t *op_flags,
+		enum rte_bbdev_op_type op_type)
+{
+	char *tok = NULL;
+	uint32_t op_flag_value = 0;
+
+	tok = strtok(tokens, VALUE_DELIMITER);
+	if (tok == NULL)
+		return -1;
+
+	while (tok != NULL) {
+		trim_space(tok);
+		if (op_type == RTE_BBDEV_OP_TURBO_DEC) {
+			if (op_decoder_flag_strtoul(tok, &op_flag_value) == -1)
+				return -1;
+		} else if (op_type == RTE_BBDEV_OP_TURBO_ENC) {
+			if (op_encoder_flag_strtoul(tok, &op_flag_value) == -1)
+				return -1;
+		} else {
+			return -1;
+		}
+
+		*op_flags = *op_flags | op_flag_value;
+
+		tok = strtok(NULL, VALUE_DELIMITER);
+		if (tok == NULL)
+			break;
+	}
+
+	return 0;
+}
+
+/* convert turbo encoder/decoder op_type from string to enum */
+static int
+op_turbo_type_strtol(char *token, enum rte_bbdev_op_type *op_type)
+{
+	trim_space(token);
+	if (!strcmp(token, "RTE_BBDEV_OP_TURBO_DEC"))
+		*op_type = RTE_BBDEV_OP_TURBO_DEC;
+	else if (!strcmp(token, "RTE_BBDEV_OP_TURBO_ENC"))
+		*op_type = RTE_BBDEV_OP_TURBO_ENC;
+	else if (!strcmp(token, "RTE_BBDEV_OP_NONE"))
+		*op_type = RTE_BBDEV_OP_NONE;
+	else {
+		printf("Invalid turbo op_type: '%s'\n", token);
+		return -1;
+	}
+
+	return 0;
+}
+
+/* tokenization expected status values separated by a comma */
+static int
+parse_expected_status(char *tokens, int *status, enum rte_bbdev_op_type op_type)
+{
+	char *tok = NULL;
+	bool status_ok = false;
+
+	tok = strtok(tokens, VALUE_DELIMITER);
+	if (tok == NULL)
+		return -1;
+
+	while (tok != NULL) {
+		trim_space(tok);
+		if (!strcmp(tok, "OK"))
+			status_ok = true;
+		else if (!strcmp(tok, "DMA"))
+			*status = *status | (1 << RTE_BBDEV_DRV_ERROR);
+		else if (!strcmp(tok, "FCW"))
+			*status = *status | (1 << RTE_BBDEV_DATA_ERROR);
+		else if (!strcmp(tok, "CRC")) {
+			if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+				*status = *status | (1 << RTE_BBDEV_CRC_ERROR);
+			else {
+				printf(
+						"CRC is only a valid value for turbo decoder\n");
+				return -1;
+			}
+		} else {
+			printf("Invalid status: '%s'\n", tok);
+			return -1;
+		}
+
+		tok = strtok(NULL, VALUE_DELIMITER);
+		if (tok == NULL)
+			break;
+	}
+
+	if (status_ok && *status != 0) {
+		printf(
+				"Invalid status values. Cannot be OK and ERROR at the same time.\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/* parse ops data entry (there can be more than 1 input entry, each will be
+ * contained in a separate op_data_buf struct)
+ */
+static int
+parse_data_entry(const char *key_token, char *token,
+		struct test_bbdev_vector *vector, enum op_data_type type,
+		const char *prefix)
+{
+	int ret;
+	uint32_t data_length = 0;
+	uint32_t *data = NULL;
+	unsigned int id;
+	struct op_data_buf *op_data;
+	unsigned int *nb_ops;
+
+	if (type >= DATA_NUM_TYPES) {
+		printf("Unknown op type: %d!\n", type);
+		return -1;
+	}
+
+	op_data = vector->entries[type].segments;
+	nb_ops = &vector->entries[type].nb_segments;
+
+	if (*nb_ops >= RTE_BBDEV_MAX_CODE_BLOCKS) {
+		printf("Too many segments (code blocks defined): %u, max %d!\n",
+				*nb_ops, RTE_BBDEV_MAX_CODE_BLOCKS);
+		return -1;
+	}
+
+	if (sscanf(key_token + strlen(prefix), "%u", &id) != 1) {
+		printf("Missing ID of %s\n", prefix);
+		return -1;
+	}
+	if (id != *nb_ops) {
+		printf(
+			"Please order data entries sequentially, i.e. %s0, %s1, ...\n",
+				prefix, prefix);
+		return -1;
+	}
+
+	/* Clear new op data struct */
+	memset(op_data + *nb_ops, 0, sizeof(struct op_data_buf));
+
+	ret = parse_values(token, &data, &data_length);
+	if (!ret) {
+		op_data[*nb_ops].addr = data;
+		op_data[*nb_ops].length = data_length;
+		++(*nb_ops);
+	}
+
+	return ret;
+}
+
+/* parses turbo decoder parameters and assigns them to the test vector */
+static int
+parse_decoder_params(const char *key_token, char *token,
+		struct test_bbdev_vector *vector)
+{
+	int ret = 0, status = 0;
+	uint32_t op_flags = 0;
+	char *err = NULL;
+
+	struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+	/* compare keys */
+	if (starts_with(key_token, op_data_prefixes[DATA_INPUT]))
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_INPUT, op_data_prefixes[DATA_INPUT]);
+
+	else if (starts_with(key_token, op_data_prefixes[DATA_SOFT_OUTPUT]))
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_SOFT_OUTPUT,
+				op_data_prefixes[DATA_SOFT_OUTPUT]);
+
+	else if (starts_with(key_token, op_data_prefixes[DATA_HARD_OUTPUT]))
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_HARD_OUTPUT,
+				op_data_prefixes[DATA_HARD_OUTPUT]);
+	else if (!strcmp(key_token, "e")) {
+		vector->mask |= TEST_BBDEV_VF_E;
+		turbo_dec->cb_params.e = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ea")) {
+		vector->mask |= TEST_BBDEV_VF_EA;
+		turbo_dec->tb_params.ea = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "eb")) {
+		vector->mask |= TEST_BBDEV_VF_EB;
+		turbo_dec->tb_params.eb = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k")) {
+		vector->mask |= TEST_BBDEV_VF_K;
+		turbo_dec->cb_params.k = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k_pos")) {
+		vector->mask |= TEST_BBDEV_VF_K_POS;
+		turbo_dec->tb_params.k_pos = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k_neg")) {
+		vector->mask |= TEST_BBDEV_VF_K_NEG;
+		turbo_dec->tb_params.k_neg = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "c")) {
+		vector->mask |= TEST_BBDEV_VF_C;
+		turbo_dec->tb_params.c = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "c_neg")) {
+		vector->mask |= TEST_BBDEV_VF_C_NEG;
+		turbo_dec->tb_params.c_neg = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "cab")) {
+		vector->mask |= TEST_BBDEV_VF_CAB;
+		turbo_dec->tb_params.cab = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "rv_index")) {
+		vector->mask |= TEST_BBDEV_VF_RV_INDEX;
+		turbo_dec->rv_index = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "iter_max")) {
+		vector->mask |= TEST_BBDEV_VF_ITER_MAX;
+		turbo_dec->iter_max = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "iter_min")) {
+		vector->mask |= TEST_BBDEV_VF_ITER_MIN;
+		turbo_dec->iter_min = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "expected_iter_count")) {
+		vector->mask |= TEST_BBDEV_VF_EXPECTED_ITER_COUNT;
+		turbo_dec->iter_count = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ext_scale")) {
+		vector->mask |= TEST_BBDEV_VF_EXT_SCALE;
+		turbo_dec->ext_scale = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "num_maps")) {
+		vector->mask |= TEST_BBDEV_VF_NUM_MAPS;
+		turbo_dec->num_maps = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "code_block_mode")) {
+		vector->mask |= TEST_BBDEV_VF_CODE_BLOCK_MODE;
+		turbo_dec->code_block_mode = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "op_flags")) {
+		vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
+		ret = parse_turbo_flags(token, &op_flags,
+			vector->op_type);
+		if (!ret)
+			turbo_dec->op_flags = op_flags;
+	} else if (!strcmp(key_token, "expected_status")) {
+		vector->mask |= TEST_BBDEV_VF_EXPECTED_STATUS;
+		ret = parse_expected_status(token, &status, vector->op_type);
+		if (!ret)
+			vector->expected_status = status;
+	} else {
+		printf("Invalid dec key: '%s'\n", key_token);
+		return -1;
+	}
+
+	if (ret != 0) {
+		printf("Failed to convert '%s\t%s'\n", key_token, token);
+		return -1;
+	}
+
+	return 0;
+}
+
+/* parses turbo encoder parameters and assigns them to the test vector */
+static int
+parse_encoder_params(const char *key_token, char *token,
+		struct test_bbdev_vector *vector)
+{
+	int ret = 0, status = 0;
+	uint32_t op_flags = 0;
+	char *err = NULL;
+
+	struct rte_bbdev_op_turbo_enc *turbo_enc = &vector->turbo_enc;
+
+	if (starts_with(key_token, op_data_prefixes[DATA_INPUT]))
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_INPUT, op_data_prefixes[DATA_INPUT]);
+	else if (starts_with(key_token, "output"))
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_HARD_OUTPUT, "output");
+	else if (!strcmp(key_token, "e")) {
+		vector->mask |= TEST_BBDEV_VF_E;
+		turbo_enc->cb_params.e = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ea")) {
+		vector->mask |= TEST_BBDEV_VF_EA;
+		turbo_enc->tb_params.ea = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "eb")) {
+		vector->mask |= TEST_BBDEV_VF_EB;
+		turbo_enc->tb_params.eb = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k")) {
+		vector->mask |= TEST_BBDEV_VF_K;
+		turbo_enc->cb_params.k = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k_neg")) {
+		vector->mask |= TEST_BBDEV_VF_K_NEG;
+		turbo_enc->tb_params.k_neg = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "k_pos")) {
+		vector->mask |= TEST_BBDEV_VF_K_POS;
+		turbo_enc->tb_params.k_pos = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "c_neg")) {
+		vector->mask |= TEST_BBDEV_VF_C_NEG;
+		turbo_enc->tb_params.c_neg = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "c")) {
+		vector->mask |= TEST_BBDEV_VF_C;
+		turbo_enc->tb_params.c = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "cab")) {
+		vector->mask |= TEST_BBDEV_VF_CAB;
+		turbo_enc->tb_params.cab = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "rv_index")) {
+		vector->mask |= TEST_BBDEV_VF_RV_INDEX;
+		turbo_enc->rv_index = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ncb")) {
+		vector->mask |= TEST_BBDEV_VF_NCB;
+		turbo_enc->cb_params.ncb = (uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ncb_neg")) {
+		vector->mask |= TEST_BBDEV_VF_NCB_NEG;
+		turbo_enc->tb_params.ncb_neg =
+				(uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "ncb_pos")) {
+		vector->mask |= TEST_BBDEV_VF_NCB_POS;
+		turbo_enc->tb_params.ncb_pos =
+				(uint16_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "r")) {
+		vector->mask |= TEST_BBDEV_VF_R;
+		turbo_enc->tb_params.r = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "code_block_mode")) {
+		vector->mask |= TEST_BBDEV_VF_CODE_BLOCK_MODE;
+		turbo_enc->code_block_mode = (uint8_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "op_flags")) {
+		vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
+		ret = parse_turbo_flags(token, &op_flags,
+				vector->op_type);
+		if (!ret)
+			turbo_enc->op_flags = op_flags;
+	} else if (!strcmp(key_token, "expected_status")) {
+		vector->mask |= TEST_BBDEV_VF_EXPECTED_STATUS;
+		ret = parse_expected_status(token, &status, vector->op_type);
+		if (!ret)
+			vector->expected_status = status;
+	} else {
+		printf("Invalid enc key: '%s'\n", key_token);
+		return -1;
+	}
+
+	if (ret != 0) {
+		printf("Failed to convert '%s\t%s'\n", key_token, token);
+		return -1;
+	}
+
+	return 0;
+}
+
+/* checks the type of key and assigns data */
+static int
+parse_entry(char *entry, struct test_bbdev_vector *vector)
+{
+	int ret = 0;
+	char *token, *key_token;
+	enum rte_bbdev_op_type op_type = RTE_BBDEV_OP_NONE;
+
+	if (entry == NULL) {
+		printf("Expected entry value\n");
+		return -1;
+	}
+
+	/* get key */
+	token = strtok(entry, ENTRY_DELIMITER);
+	key_token = token;
+	/* get values for key */
+	token = strtok(NULL, ENTRY_DELIMITER);
+
+	if (key_token == NULL || token == NULL) {
+		printf("Expected 'key = values' but was '%.40s'...\n", entry);
+		return -1;
+	}
+	trim_space(key_token);
+
+	/* first key_token has to specify type of operation */
+	if (vector->op_type == RTE_BBDEV_OP_NONE) {
+		if (!strcmp(key_token, "op_type")) {
+			ret = op_turbo_type_strtol(token, &op_type);
+			if (!ret)
+				vector->op_type = op_type;
+			return (!ret) ? 0 : -1;
+		}
+		printf("First key_token (%s) does not specify op_type\n",
+				key_token);
+		return -1;
+	}
+
+	/* compare keys */
+	if (vector->op_type == RTE_BBDEV_OP_TURBO_DEC) {
+		if (parse_decoder_params(key_token, token, vector) == -1)
+			return -1;
+	} else if (vector->op_type == RTE_BBDEV_OP_TURBO_ENC) {
+		if (parse_encoder_params(key_token, token, vector) == -1)
+			return -1;
+	}
+
+	return 0;
+}
+
+static int
+check_decoder_segments(struct test_bbdev_vector *vector)
+{
+	unsigned char i;
+	struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+	if (vector->entries[DATA_INPUT].nb_segments == 0)
+		return -1;
+
+	for (i = 0; i < vector->entries[DATA_INPUT].nb_segments; i++)
+		if (vector->entries[DATA_INPUT].segments[i].addr == NULL)
+			return -1;
+
+	if (vector->entries[DATA_HARD_OUTPUT].nb_segments == 0)
+		return -1;
+
+	for (i = 0; i < vector->entries[DATA_HARD_OUTPUT].nb_segments;
+			i++)
+		if (vector->entries[DATA_HARD_OUTPUT].segments[i].addr == NULL)
+			return -1;
+
+	if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT) &&
+			(vector->entries[DATA_SOFT_OUTPUT].nb_segments == 0))
+		return -1;
+
+	for (i = 0; i < vector->entries[DATA_SOFT_OUTPUT].nb_segments;
+			i++)
+		if (vector->entries[DATA_SOFT_OUTPUT].segments[i].addr == NULL)
+			return -1;
+
+	return 0;
+}
+
+static int
+check_decoder_llr_spec(struct test_bbdev_vector *vector)
+{
+	struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+	/* Check input LLR sign formalism specification */
+	if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN) &&
+			(turbo_dec->op_flags &
+			RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN)) {
+		printf(
+			"Both positive and negative LLR input flags were set!\n");
+		return -1;
+	}
+	if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN) &&
+			!(turbo_dec->op_flags &
+			RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN)) {
+		printf(
+			"WARNING: input LLR sign formalism was not specified and will be set to negative LLR for '1' bit\n");
+		turbo_dec->op_flags |= RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN;
+	}
+
+	if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT))
+		return 0;
+
+	/* Check output LLR sign formalism specification */
+	if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT) &&
+			(turbo_dec->op_flags &
+			RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT)) {
+		printf(
+			"Both positive and negative LLR output flags were set!\n");
+		return -1;
+	}
+	if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT) &&
+			!(turbo_dec->op_flags &
+			RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT)) {
+		printf(
+			"WARNING: soft output LLR sign formalism was not specified and will be set to negative LLR for '1' bit\n");
+		turbo_dec->op_flags |=
+				RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+	}
+
+	return 0;
+}
+
+/* checks decoder parameters */
+static int
+check_decoder(struct test_bbdev_vector *vector)
+{
+	struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+	const int mask = vector->mask;
+
+	if (check_decoder_segments(vector) < 0)
+		return -1;
+
+	if (check_decoder_llr_spec(vector) < 0)
+		return -1;
+
+	/* Check which params were set */
+	if (!(mask & TEST_BBDEV_VF_CODE_BLOCK_MODE)) {
+		printf(
+			"WARNING: code_block_mode was not specified in vector file and will be set to 1 (0 - TB Mode, 1 - CB mode)\n");
+		turbo_dec->code_block_mode = 1;
+	}
+	if (turbo_dec->code_block_mode == 0) {
+		if (!(mask & TEST_BBDEV_VF_EA))
+			printf(
+				"WARNING: ea was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_EB))
+			printf(
+				"WARNING: eb was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K_NEG))
+			printf(
+				"WARNING: k_neg was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K_POS))
+			printf(
+				"WARNING: k_pos was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_C_NEG))
+			printf(
+				"WARNING: c_neg was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_C)) {
+			printf(
+				"WARNING: c was not specified in vector file and will be set to 1\n");
+			turbo_dec->tb_params.c = 1;
+		}
+		if (!(mask & TEST_BBDEV_VF_CAB))
+			printf(
+				"WARNING: cab was not specified in vector file and will be set to 0\n");
+	} else {
+		if (!(mask & TEST_BBDEV_VF_E))
+			printf(
+				"WARNING: e was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K))
+			printf(
+				"WARNING: k was not specified in vector file and will be set to 0\n");
+	}
+	if (!(mask & TEST_BBDEV_VF_RV_INDEX))
+		printf(
+			"WARNING: rv_index was not specified in vector file and will be set to 0\n");
+	if (!(mask & TEST_BBDEV_VF_ITER_MIN))
+		printf(
+			"WARNING: iter_min was not specified in vector file and will be set to 0\n");
+	if (!(mask & TEST_BBDEV_VF_ITER_MAX))
+		printf(
+			"WARNING: iter_max was not specified in vector file and will be set to 0\n");
+	if (!(mask & TEST_BBDEV_VF_EXPECTED_ITER_COUNT))
+		printf(
+			"WARNING: expected_iter_count was not specified in vector file and iter_count will not be validated\n");
+	if (!(mask & TEST_BBDEV_VF_EXT_SCALE))
+		printf(
+			"WARNING: ext_scale was not specified in vector file and will be set to 0\n");
+	if (!(mask & TEST_BBDEV_VF_OP_FLAGS)) {
+		printf(
+			"WARNING: op_flags was not specified in vector file and capabilities will not be validated\n");
+		turbo_dec->num_maps = 0;
+	} else if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_MAP_DEC) &&
+			mask & TEST_BBDEV_VF_NUM_MAPS) {
+		printf(
+			"WARNING: RTE_BBDEV_TURBO_MAP_DEC was not set in vector file and num_maps will be set to 0\n");
+		turbo_dec->num_maps = 0;
+	}
+	if (!(mask & TEST_BBDEV_VF_EXPECTED_STATUS))
+		printf(
+			"WARNING: expected_status was not specified in vector file and will be set to 0\n");
+	return 0;
+}
+
+/* checks encoder parameters */
+static int
+check_encoder(struct test_bbdev_vector *vector)
+{
+	unsigned char i;
+	const int mask = vector->mask;
+
+	if (vector->entries[DATA_INPUT].nb_segments == 0)
+		return -1;
+
+	for (i = 0; i < vector->entries[DATA_INPUT].nb_segments; i++)
+		if (vector->entries[DATA_INPUT].segments[i].addr == NULL)
+			return -1;
+
+	if (vector->entries[DATA_HARD_OUTPUT].nb_segments == 0)
+		return -1;
+
+	for (i = 0; i < vector->entries[DATA_HARD_OUTPUT].nb_segments; i++)
+		if (vector->entries[DATA_HARD_OUTPUT].segments[i].addr == NULL)
+			return -1;
+
+	if (!(mask & TEST_BBDEV_VF_CODE_BLOCK_MODE)) {
+		printf(
+			"WARNING: code_block_mode was not specified in vector file and will be set to 1\n");
+		vector->turbo_enc.code_block_mode = 1;
+	}
+	if (vector->turbo_enc.code_block_mode == 0) {
+		if (!(mask & TEST_BBDEV_VF_EA) && (vector->turbo_enc.op_flags &
+				RTE_BBDEV_TURBO_RATE_MATCH))
+			printf(
+				"WARNING: ea was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_EB) && (vector->turbo_enc.op_flags &
+				RTE_BBDEV_TURBO_RATE_MATCH))
+			printf(
+				"WARNING: eb was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K_NEG))
+			printf(
+				"WARNING: k_neg was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K_POS))
+			printf(
+				"WARNING: k_pos was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_C_NEG))
+			printf(
+				"WARNING: c_neg was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_C)) {
+			printf(
+				"WARNING: c was not specified in vector file and will be set to 1\n");
+			vector->turbo_enc.tb_params.c = 1;
+		}
+		if (!(mask & TEST_BBDEV_VF_CAB) && (vector->turbo_enc.op_flags &
+				RTE_BBDEV_TURBO_RATE_MATCH))
+			printf(
+				"WARNING: cab was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_NCB_NEG))
+			printf(
+				"WARNING: ncb_neg was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_NCB_POS))
+			printf(
+				"WARNING: ncb_pos was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_R))
+			printf(
+				"WARNING: r was not specified in vector file and will be set to 0\n");
+	} else {
+		if (!(mask & TEST_BBDEV_VF_E) && (vector->turbo_enc.op_flags &
+				RTE_BBDEV_TURBO_RATE_MATCH))
+			printf(
+				"WARNING: e was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_K))
+			printf(
+				"WARNING: k was not specified in vector file and will be set to 0\n");
+		if (!(mask & TEST_BBDEV_VF_NCB))
+			printf(
+				"WARNING: ncb was not specified in vector file and will be set to 0\n");
+	}
+	if (!(mask & TEST_BBDEV_VF_RV_INDEX))
+		printf(
+			"WARNING: rv_index was not specified in vector file and will be set to 0\n");
+	if (!(mask & TEST_BBDEV_VF_OP_FLAGS))
+		printf(
+			"WARNING: op_flags was not specified in vector file and capabilities will not be validated\n");
+	if (!(mask & TEST_BBDEV_VF_EXPECTED_STATUS))
+		printf(
+			"WARNING: expected_status was not specified in vector file and will be set to 0\n");
+
+	return 0;
+}
+
+static int
+bbdev_check_vector(struct test_bbdev_vector *vector)
+{
+	if (vector->op_type == RTE_BBDEV_OP_TURBO_DEC) {
+		if (check_decoder(vector) == -1)
+			return -1;
+	} else if (vector->op_type == RTE_BBDEV_OP_TURBO_ENC) {
+		if (check_encoder(vector) == -1)
+			return -1;
+	} else if (vector->op_type != RTE_BBDEV_OP_NONE) {
+		printf("Vector was not filled\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+test_bbdev_vector_read(const char *filename,
+		struct test_bbdev_vector *vector)
+{
+	int ret = 0;
+	size_t len = 0;
+
+	FILE *fp = NULL;
+	char *line = NULL;
+	char *entry = NULL;
+
+	fp = fopen(filename, "r");
+	if (fp == NULL) {
+		printf("Failed to open file %s\n", filename);
+		return -1;
+	}
+
+	while (getline(&line, &len, fp) != -1) {
+
+		/* ignore comments and new lines */
+		if (line[0] == '#' || line[0] == '/' || line[0] == '\n'
+			|| line[0] == '\r')
+			continue;
+
+		trim_space(line);
+
+		/* buffer for multiline */
+		char *entry_realloc = realloc(entry, strlen(line) + 1);
+		if (entry_realloc == NULL) {
+			printf("Failed to realloc %zu bytes\n",
+					strlen(line) + 1);
+			ret = -ENOMEM;
+			goto exit;
+		}
+		entry = entry_realloc;
+
+		memset(entry, 0, strlen(line) + 1);
+		strncpy(entry, line, strlen(line));
+
+		/* check if entry ends with , or = */
+		if (entry[strlen(entry) - 1] == ','
+			|| entry[strlen(entry) - 1] == '=') {
+			while (getline(&line, &len, fp) != -1) {
+				trim_space(line);
+
+				/* extend entry by the length of the new line */
+				char *entry_extended = realloc(entry,
+						strlen(line) +
+						strlen(entry) + 1);
+
+				if (entry_extended == NULL) {
+					printf("Failed to allocate %zu bytes\n",
+							strlen(line) +
+							strlen(entry) + 1);
+					ret = -ENOMEM;
+					goto exit;
+				}
+
+				entry = entry_extended;
+				strncat(entry, line, strlen(line));
+
+				if (entry[strlen(entry) - 1] != ',')
+					break;
+			}
+		}
+		ret = parse_entry(entry, vector);
+		if (ret != 0) {
+			printf("An error occurred while parsing!\n");
+			goto exit;
+		}
+	}
+	ret = bbdev_check_vector(vector);
+	if (ret != 0)
+		printf("An error occurred while checking!\n");
+
+exit:
+	fclose(fp);
+	free(line);
+	free(entry);
+
+	return ret;
+}
diff --git a/app/test-bbdev/test_bbdev_vector.h b/app/test-bbdev/test_bbdev_vector.h
new file mode 100644
index 0000000..7f41d20
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_vector.h
@@ -0,0 +1,99 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_BBDEV_VECTOR_H_
+#define TEST_BBDEV_VECTOR_H_
+
+#include <rte_bbdev_op.h>
+
+/* Flags that are set when a specific parameter is defined in the vector file */
+enum {
+	TEST_BBDEV_VF_E = (1ULL << 0),
+	TEST_BBDEV_VF_EA = (1ULL << 1),
+	TEST_BBDEV_VF_EB = (1ULL << 2),
+	TEST_BBDEV_VF_K = (1ULL << 3),
+	TEST_BBDEV_VF_K_NEG = (1ULL << 4),
+	TEST_BBDEV_VF_K_POS = (1ULL << 5),
+	TEST_BBDEV_VF_C_NEG = (1ULL << 6),
+	TEST_BBDEV_VF_C = (1ULL << 7),
+	TEST_BBDEV_VF_CAB = (1ULL << 8),
+	TEST_BBDEV_VF_RV_INDEX = (1ULL << 9),
+	TEST_BBDEV_VF_ITER_MAX = (1ULL << 10),
+	TEST_BBDEV_VF_ITER_MIN = (1ULL << 11),
+	TEST_BBDEV_VF_EXPECTED_ITER_COUNT = (1ULL << 12),
+	TEST_BBDEV_VF_EXT_SCALE = (1ULL << 13),
+	TEST_BBDEV_VF_NUM_MAPS = (1ULL << 14),
+	TEST_BBDEV_VF_NCB = (1ULL << 15),
+	TEST_BBDEV_VF_NCB_NEG = (1ULL << 16),
+	TEST_BBDEV_VF_NCB_POS = (1ULL << 17),
+	TEST_BBDEV_VF_R = (1ULL << 18),
+	TEST_BBDEV_VF_CODE_BLOCK_MODE = (1ULL << 19),
+	TEST_BBDEV_VF_OP_FLAGS = (1ULL << 20),
+	TEST_BBDEV_VF_EXPECTED_STATUS = (1ULL << 21),
+};
+
+enum op_data_type {
+	DATA_INPUT = 0,
+	DATA_SOFT_OUTPUT,
+	DATA_HARD_OUTPUT,
+	DATA_NUM_TYPES,
+};
+
+struct op_data_buf {
+	uint32_t *addr;
+	uint32_t length;
+};
+
+struct op_data_entries {
+	struct op_data_buf segments[RTE_BBDEV_MAX_CODE_BLOCKS];
+	unsigned int nb_segments;
+};
+
+struct test_bbdev_vector {
+	enum rte_bbdev_op_type op_type;
+	int expected_status;
+	int mask;
+	union {
+		struct rte_bbdev_op_turbo_dec turbo_dec;
+		struct rte_bbdev_op_turbo_enc turbo_enc;
+	};
+	/* Additional storage for op data entries */
+	struct op_data_entries entries[DATA_NUM_TYPES];
+};
+
+/* fills test vector parameters based on test file */
+int
+test_bbdev_vector_read(const char *filename,
+		struct test_bbdev_vector *vector);
+
+
+#endif /* TEST_BBDEV_VECTOR_H_ */
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_null.data b/app/test-bbdev/test_vectors/bbdev_vector_null.data
new file mode 100644
index 0000000..91aea62
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_null.data
@@ -0,0 +1,32 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+op_type =
+RTE_BBDEV_OP_NONE
\ No newline at end of file
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_td_default.data b/app/test-bbdev/test_vectors/bbdev_vector_td_default.data
new file mode 100644
index 0000000..d69ca6b
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_td_default.data
@@ -0,0 +1,81 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+op_type =
+RTE_BBDEV_OP_TURBO_DEC
+
+input0 =
+0x7f007f00, 0x7f817f00, 0x767f8100, 0x817f8100, 0x81008100, 0x7f818100, 0x81817f00, 0x7f818100,
+0x86007f00, 0x7f818100, 0x887f8100, 0x81815200, 0x81008100, 0x817f7f00, 0x7f7f8100, 0x9e817f00,
+0x7f7f0000, 0xb97f0000, 0xa7810000, 0x7f7f4a7f, 0x7f810000, 0x7f7f7f7f, 0x81720000, 0x40658181,
+0x84810000, 0x817f0000, 0x81810000, 0x7f818181, 0x7f810000, 0x81815a81, 0x817f0000, 0x7a867f7b,
+0x817f0000, 0x6b7f0000, 0x7f810000, 0x81818181, 0x817f0000, 0x7f7f817f, 0x7f7f0000, 0xab7f4f7f,
+0x817f0000, 0x817f6c00, 0x81810000, 0x817f8181, 0x7f810000, 0x81816981, 0x7f7f0000, 0x007f8181
+
+hard_output0 =
+0xa7d6732e, 0x61
+
+soft_output0 =
+0x7f7f7f7f, 0x81817f7f, 0x7f817f81, 0x817f7f81, 0x81817f81, 0x81817f81, 0x8181817f, 0x7f81817f,
+0x7f81817f, 0x7f817f7f, 0x81817f7f, 0x817f8181, 0x81818181, 0x817f7f7f, 0x7f818181, 0x817f817f,
+0x81818181, 0x81817f7f, 0x7f817f81, 0x7f81817f, 0x817f7f7f, 0x817f7f7f, 0x7f81817f, 0x817f817f,
+0x81817f7f, 0x81817f7f, 0x81817f7f, 0x7f817f7f, 0x817f7f81, 0x7f7f8181, 0x81817f81, 0x817f7f7f,
+0x7f7f8181
+
+e =
+17280
+
+k =
+40
+
+rv_index =
+1
+
+iter_max =
+8
+
+iter_min =
+4
+
+expected_iter_count =
+8
+
+ext_scale =
+15
+
+num_maps =
+0
+
+op_flags =
+RTE_BBDEV_TURBO_SOFT_OUTPUT, RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE, RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN,
+RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT
+
+expected_status =
+OK
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_te_default.data b/app/test-bbdev/test_vectors/bbdev_vector_te_default.data
new file mode 100644
index 0000000..b5cecf4
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_te_default.data
@@ -0,0 +1,60 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+op_type =
+RTE_BBDEV_OP_TURBO_ENC
+
+input0 =
+0x11d2bcac, 0x4d
+
+output0 =
+0xd2399179, 0x640eb999, 0x2cbaf577, 0xaf224ae2, 0x9d139927, 0xe6909b29, 0xa25b7f47, 0x2aa224ce,
+0x79f2
+
+e =
+272
+
+k =
+40
+
+ncb =
+192
+
+rv_index =
+0
+
+code_block_mode =
+1
+
+op_flags =
+RTE_BBDEV_TURBO_RATE_MATCH
+
+expected_status =
+OK
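For reference, the grammar the vector files above follow (and that `test_bbdev_vector.c` parses) is "key =" followed by one or more value lines, with an entry continuing onto the next line as long as the accumulated text ends in ',' or '='. A minimal, illustrative sketch of that accumulation logic in Python (not part of the patch; function and variable names are my own):

```python
# Hypothetical sketch of the .data vector-file entry accumulation:
# '#'/'/'-prefixed and blank lines are skipped; an entry keeps
# consuming lines while it ends with ',' or '='.
def parse_vector(text):
    entries = {}
    acc = ""
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line[0] in "#/":
            continue  # comment or blank line
        acc += line
        if acc.endswith((",", "=")):
            continue  # entry continues on the next line
        key, _, value = acc.partition("=")
        entries[key.strip()] = value.strip()
        acc = ""
    return entries


sample = """
# comment
op_type =
RTE_BBDEV_OP_TURBO_ENC

input0 =
0x11d2bcac, 0x4d

e =
272
"""
print(parse_vector(sample))
```

Unlike the C parser, this sketch does not distinguish the first continuation line from later ones, but it reproduces the same entries for the default vectors shipped with the patch.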