[v2,06/22] event/dlb2: add probe

Message ID 1602958879-8558-7-git-send-email-timothy.mcdaniel@intel.com
State Changes Requested
Delegated to: Jerin Jacob
Series
  • Add DLB2 PMD

Checks

Context Check Description
ci/checkpatch warning coding style issues

Commit Message

Timothy McDaniel Oct. 17, 2020, 6:21 p.m. UTC
The DLB2 hardware is a PCI device. This commit adds
support for probe and other initialization. The
dlb2_iface.[ch] files implement a flexible interface
that supports both the PF PMD and the bifurcated PMD;
the bifurcated PMD will be released in a future
patch set. Note that the flexible interface is used
only for configuration, not in the data path. The
shared code is added in pf/base.
This PMD supports command-line parameters, which are
parsed at probe time.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/dlb2.c                      |  532 +++++
 drivers/event/dlb2/dlb2_iface.c                |   28 +
 drivers/event/dlb2/dlb2_iface.h                |   29 +
 drivers/event/dlb2/meson.build                 |    6 +-
 drivers/event/dlb2/pf/base/dlb2_hw_types.h     |  367 ++++
 drivers/event/dlb2/pf/base/dlb2_mbox.h         |  596 ++++++
 drivers/event/dlb2/pf/base/dlb2_osdep.h        |  247 +++
 drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h |  440 +++++
 drivers/event/dlb2/pf/base/dlb2_osdep_list.h   |  131 ++
 drivers/event/dlb2/pf/base/dlb2_osdep_types.h  |   31 +
 drivers/event/dlb2/pf/base/dlb2_regs.h         | 2527 ++++++++++++++++++++++++
 drivers/event/dlb2/pf/base/dlb2_resource.c     |  274 +++
 drivers/event/dlb2/pf/base/dlb2_resource.h     | 1913 ++++++++++++++++++
 drivers/event/dlb2/pf/dlb2_main.c              |  615 ++++++
 drivers/event/dlb2/pf/dlb2_main.h              |  106 +
 drivers/event/dlb2/pf/dlb2_pf.c                |  244 +++
 16 files changed, 8085 insertions(+), 1 deletion(-)
 create mode 100644 drivers/event/dlb2/dlb2.c
 create mode 100644 drivers/event/dlb2/dlb2_iface.c
 create mode 100644 drivers/event/dlb2/dlb2_iface.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_list.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_types.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c
 create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.h
 create mode 100644 drivers/event/dlb2/pf/dlb2_main.c
 create mode 100644 drivers/event/dlb2/pf/dlb2_main.h
 create mode 100644 drivers/event/dlb2/pf/dlb2_pf.c

Comments

Jerin Jacob Oct. 18, 2020, 8:39 a.m. UTC | #1
On Sat, Oct 17, 2020 at 11:52 PM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> The DLB2 hardware is a PCI device. This commit adds
> support for probe and other initialization. The
> dlb2_iface.[ch] files implement a flexible interface
> that supports both the PF PMD and the bifurcated PMD.
> The bifurcated PMD will be released in a future
> patch set. Note that the flexible interface is only
> used for configuration, and is not used in the data
> path. The shared code is added in pf/base.
> This PMD supports command line parameters, and those
> are parsed at probe-time.
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>



There is a build issue with clang 10.

[for-main]dell[dpdk-next-eventdev] $ clang -v
clang version 10.0.1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-pc-linux-gnu/10.2.0
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-pc-linux-gnu/8.4.0
Found candidate GCC installation: /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/10.2.0
Found candidate GCC installation: /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/8.4.0
Found candidate GCC installation: /usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0
Found candidate GCC installation: /usr/lib/gcc/x86_64-pc-linux-gnu/8.4.0
Found candidate GCC installation: /usr/lib64/gcc/x86_64-pc-linux-gnu/10.2.0
Found candidate GCC installation: /usr/lib64/gcc/x86_64-pc-linux-gnu/8.4.0
Selected GCC installation: /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/10.2.0
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64

meson  -Dexamples=l3fwd --buildtype=debugoptimized --werror
--default-library=static /export/dpdk-next-eventdev/devtools/..
./build-clang-static



ccache clang -Idrivers/libtmp_rte_pmd_dlb2_event.a.p -Idrivers
-I../drivers -Idrivers/event/dlb2 -I../drivers/event/dlb2
-Ilib/librte_eventdev -I../lib/librte_eventdev -I. -I.. -Iconfig
-I../config -Ilib/librte_eal/include -I../lib/librte_eal/include
-Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
-Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
-I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
-Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
-I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
-Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
-I../lib/librte_net -Ilib/librte_mbuf -I../lib/librte_mbuf
-Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_meter
-I../lib/librte_meter -Ilib/librte_hash -I../lib/librte_hash
-Ilib/librte_timer -I../lib/librte_timer -Ilib/librte_cryptodev
-I../lib/librte_cryptodev -Ilib/librte_pci -I../lib/librte_pci
-Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux
-Xclang -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall
-Winvalid-pch -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual
-Wdeprecated -Wformat-nonliteral -Wformat-security
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare
-Wstrict-prototypes -Wundef -Wwrite-strings
-Wno-address-of-packed-member -Wno-missing-field-initializers
-D_GNU_SOURCE -fPIC -march=native -DALLOW_EXPERIMENTAL_API
-DALLOW_INTERNAL_API -MD -MQ
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_pf_dlb2_pf.c.o -MF
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_pf_dlb2_pf.c.o.d -o
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_pf_dlb2_pf.c.o -c
../drivers/event/dlb2/pf/dlb2_pf.c
In file included from ../drivers/event/dlb2/pf/dlb2_pf.c:36:
../drivers/event/dlb2/pf/../dlb2_inline_fns.h:45:2: error: use of
unknown builtin '__builtin_ia32_movntdq'
[-Wimplicit-function-declaration]
        __builtin_ia32_movntdq((__v2di *)pp_addr + 0, (__v2di)src_data0);
        ^
../drivers/event/dlb2/pf/../dlb2_inline_fns.h:45:2: note: did you mean
'__builtin_ia32_movntq'?
/usr/lib/clang/10.0.1/include/xmmintrin.h:2122:3: note:
'__builtin_ia32_movntq' declared here
  __builtin_ia32_movntq(__p, __a);
  ^
In file included from ../drivers/event/dlb2/pf/dlb2_pf.c:36:
../drivers/event/dlb2/pf/../dlb2_inline_fns.h:61:2: error: use of
unknown builtin '__builtin_ia32_movntdq'
[-Wimplicit-function-declaration]
        __builtin_ia32_movntdq((__v2di *)pp_addr, (__v2di)src_data0);
        ^
2 errors generated.
[2124/2479] Compiling C object
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o
FAILED: drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o
ccache clang -Idrivers/libtmp_rte_pmd_dlb2_event.a.p -Idrivers
-I../drivers -Idrivers/event/dlb2 -I../drivers/event/dlb2
-Ilib/librte_eventdev -I../lib/librte_eventdev -I. -I.. -Iconfig
-I../config -Ilib/librte_eal/include -I../lib/librte_eal/include
-Ilib/librte_eal/linux/include -I../lib/librte_eal/linux/include
-Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal
-I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs
-Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_telemetry
-I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
-Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
-I../lib/librte_net -Ilib/librte_mbuf -I../lib/librte_mbuf
-Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_meter
-I../lib/librte_meter -Ilib/librte_hash -I../lib/librte_hash
-Ilib/librte_timer -I../lib/librte_timer -Ilib/librte_cryptodev
-I../lib/librte_cryptodev -Ilib/librte_pci -I../lib/librte_pci
-Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux
-Xclang -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall
-Winvalid-pch -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual
-Wdeprecated -Wformat-nonliteral -Wformat-security
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare
-Wstrict-prototypes -Wundef -Wwrite-strings
-Wno-address-of-packed-member -Wno-missing-field-initializers
-D_GNU_SOURCE -fPIC -march=native -DALLOW_EXPERIMENTAL_API
-DALLOW_INTERNAL_API -MD -MQ
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o -MF
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o.d -o
drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o -c
../drivers/event/dlb2/dlb2.c
In file included from ../drivers/event/dlb2/dlb2.c:35:
../drivers/event/dlb2/dlb2_inline_fns.h:45:2: error: use of unknown
builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
        __builtin_ia32_movntdq((__v2di *)pp_addr + 0, (__v2di)src_data0);
        ^
../drivers/event/dlb2/dlb2_inline_fns.h:45:2: note: did you mean
'__builtin_ia32_movntq'?
/usr/lib/clang/10.0.1/include/xmmintrin.h:2122:3: note:
'__builtin_ia32_movntq' declared here
  __builtin_ia32_movntq(__p, __a);
  ^
In file included from ../drivers/event/dlb2/dlb2.c:35:
../drivers/event/dlb2/dlb2_inline_fns.h:61:2: error: use of unknown
builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
        __builtin_ia32_movntdq((__v2di *)pp_addr, (__v2di)src_data0);
        ^
2 errors generated.
[2125/2479] Generating rte_common_sfc_efx.sym_chk with a meson_exe.py
custom command
ninja: build stopped: subcommand failed.




> ---
>  drivers/event/dlb2/dlb2.c                      |  532 +++++
>  drivers/event/dlb2/dlb2_iface.c                |   28 +
>  drivers/event/dlb2/dlb2_iface.h                |   29 +
>  drivers/event/dlb2/meson.build                 |    6 +-
>  drivers/event/dlb2/pf/base/dlb2_hw_types.h     |  367 ++++
>  drivers/event/dlb2/pf/base/dlb2_mbox.h         |  596 ++++++
>  drivers/event/dlb2/pf/base/dlb2_osdep.h        |  247 +++
>  drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h |  440 +++++
>  drivers/event/dlb2/pf/base/dlb2_osdep_list.h   |  131 ++
>  drivers/event/dlb2/pf/base/dlb2_osdep_types.h  |   31 +
>  drivers/event/dlb2/pf/base/dlb2_regs.h         | 2527 ++++++++++++++++++++++++
>  drivers/event/dlb2/pf/base/dlb2_resource.c     |  274 +++
>  drivers/event/dlb2/pf/base/dlb2_resource.h     | 1913 ++++++++++++++++++
>  drivers/event/dlb2/pf/dlb2_main.c              |  615 ++++++
>  drivers/event/dlb2/pf/dlb2_main.h              |  106 +
>  drivers/event/dlb2/pf/dlb2_pf.c                |  244 +++
>  16 files changed, 8085 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/event/dlb2/dlb2.c
>  create mode 100644 drivers/event/dlb2/dlb2_iface.c
>  create mode 100644 drivers/event/dlb2/dlb2_iface.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_list.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_types.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c
>  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.h
>  create mode 100644 drivers/event/dlb2/pf/dlb2_main.c
>  create mode 100644 drivers/event/dlb2/pf/dlb2_main.h
>  create mode 100644 drivers/event/dlb2/pf/dlb2_pf.c
>
> diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> new file mode 100644
> index 0000000..26985b9
> --- /dev/null
> +++ b/drivers/event/dlb2/dlb2.c
> @@ -0,0 +1,532 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include <assert.h>
> +#include <errno.h>
> +#include <nmmintrin.h>
> +#include <pthread.h>
> +#include <stdint.h>
> +#include <stdbool.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <sys/fcntl.h>
> +
> +#include <rte_common.h>
> +#include <rte_config.h>
> +#include <rte_cycles.h>
> +#include <rte_debug.h>
> +#include <rte_dev.h>
> +#include <rte_errno.h>
> +#include <rte_eventdev.h>
> +#include <rte_eventdev_pmd.h>
> +#include <rte_io.h>
> +#include <rte_kvargs.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +#include <rte_mbuf.h>
> +#include <rte_prefetch.h>
> +#include <rte_ring.h>
> +#include <rte_string_fns.h>
> +
> +#include "dlb2_priv.h"
> +#include "dlb2_iface.h"
> +#include "dlb2_inline_fns.h"
> +
> +#if !defined RTE_ARCH_X86_64
> +#error "This implementation only supports RTE_ARCH_X86_64 architecture."
> +#endif
> +
> +/*
> + * Resources exposed to eventdev. Some values overridden at runtime using
> + * values returned by the DLB kernel driver.
> + */
> +#if (RTE_EVENT_MAX_QUEUES_PER_DEV > UINT8_MAX)
> +#error "RTE_EVENT_MAX_QUEUES_PER_DEV cannot fit in member max_event_queues"
> +#endif
> +static struct rte_event_dev_info evdev_dlb2_default_info = {
> +       .driver_name = "", /* probe will set */
> +       .min_dequeue_timeout_ns = DLB2_MIN_DEQUEUE_TIMEOUT_NS,
> +       .max_dequeue_timeout_ns = DLB2_MAX_DEQUEUE_TIMEOUT_NS,
> +#if (RTE_EVENT_MAX_QUEUES_PER_DEV < DLB2_MAX_NUM_LDB_QUEUES)
> +       .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
> +#else
> +       .max_event_queues = DLB2_MAX_NUM_LDB_QUEUES,
> +#endif
> +       .max_event_queue_flows = DLB2_MAX_NUM_FLOWS,
> +       .max_event_queue_priority_levels = DLB2_QID_PRIORITIES,
> +       .max_event_priority_levels = DLB2_QID_PRIORITIES,
> +       .max_event_ports = DLB2_MAX_NUM_LDB_PORTS,
> +       .max_event_port_dequeue_depth = DLB2_MAX_CQ_DEPTH,
> +       .max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
> +       .max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
> +       .max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
> +       .max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
> +       .event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
> +                         RTE_EVENT_DEV_CAP_EVENT_QOS |
> +                         RTE_EVENT_DEV_CAP_BURST_MODE |
> +                         RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
> +                         RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE |
> +                         RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES),
> +};
> +
> +struct process_local_port_data
> +dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
> +
> +/* override defaults with value(s) provided on command line */
> +static void
> +dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
> +                                int *qid_depth_thresholds)
> +{
> +       int q;
> +
> +       for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
> +               if (qid_depth_thresholds[q] != 0)
> +                       dlb2->ev_queues[q].depth_threshold =
> +                               qid_depth_thresholds[q];
> +       }
> +}
> +
> +static int
> +dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
> +{
> +       struct dlb2_hw_dev *handle = &dlb2->qm_instance;
> +       struct dlb2_hw_resource_info *dlb2_info = &handle->info;
> +       int ret;
> +
> +       /* Query driver resources provisioned for this device */
> +
> +       ret = dlb2_iface_get_num_resources(handle,
> +                                          &dlb2->hw_rsrc_query_results);
> +       if (ret) {
> +               DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d\n", ret);
> +               return ret;
> +       }
> +
> +       /* Complete filling in device resource info returned to evdev app,
> +        * overriding any default values.
> +        * The capabilities (CAPs) were set at compile time.
> +        */
> +
> +       evdev_dlb2_default_info.max_event_queues =
> +               dlb2->hw_rsrc_query_results.num_ldb_queues;
> +
> +       evdev_dlb2_default_info.max_event_ports =
> +               dlb2->hw_rsrc_query_results.num_ldb_ports;
> +
> +       evdev_dlb2_default_info.max_num_events =
> +               dlb2->hw_rsrc_query_results.num_ldb_credits;
> +
> +       /* Save off values used when creating the scheduling domain. */
> +
> +       handle->info.num_sched_domains =
> +               dlb2->hw_rsrc_query_results.num_sched_domains;
> +
> +       handle->info.hw_rsrc_max.nb_events_limit =
> +               dlb2->hw_rsrc_query_results.num_ldb_credits;
> +
> +       handle->info.hw_rsrc_max.num_queues =
> +               dlb2->hw_rsrc_query_results.num_ldb_queues +
> +               dlb2->hw_rsrc_query_results.num_dir_ports;
> +
> +       handle->info.hw_rsrc_max.num_ldb_queues =
> +               dlb2->hw_rsrc_query_results.num_ldb_queues;
> +
> +       handle->info.hw_rsrc_max.num_ldb_ports =
> +               dlb2->hw_rsrc_query_results.num_ldb_ports;
> +
> +       handle->info.hw_rsrc_max.num_dir_ports =
> +               dlb2->hw_rsrc_query_results.num_dir_ports;
> +
> +       handle->info.hw_rsrc_max.reorder_window_size =
> +               dlb2->hw_rsrc_query_results.num_hist_list_entries;
> +
> +       rte_memcpy(dlb2_info, &handle->info.hw_rsrc_max, sizeof(*dlb2_info));
> +
> +       return 0;
> +}
> +
> +#define RTE_BASE_10 10
> +
> +static int
> +dlb2_string_to_int(int *result, const char *str)
> +{
> +       long ret;
> +       char *endptr;
> +
> +       if (str == NULL || result == NULL)
> +               return -EINVAL;
> +
> +       errno = 0;
> +       ret = strtol(str, &endptr, RTE_BASE_10);
> +       if (errno)
> +               return -errno;
> +
> +       /* long int and int may be different width for some architectures */
> +       if (ret < INT_MIN || ret > INT_MAX || endptr == str)
> +               return -EINVAL;
> +
> +       *result = ret;
> +       return 0;
> +}
> +
> +static int
> +set_numa_node(const char *key __rte_unused, const char *value, void *opaque)
> +{
> +       int *socket_id = opaque;
> +       int ret;
> +
> +       ret = dlb2_string_to_int(socket_id, value);
> +       if (ret < 0)
> +               return ret;
> +
> +       if (*socket_id > RTE_MAX_NUMA_NODES)
> +               return -EINVAL;
> +       return 0;
> +}
> +
> +static int
> +set_max_num_events(const char *key __rte_unused,
> +                  const char *value,
> +                  void *opaque)
> +{
> +       int *max_num_events = opaque;
> +       int ret;
> +
> +       if (value == NULL || opaque == NULL) {
> +               DLB2_LOG_ERR("NULL pointer\n");
> +               return -EINVAL;
> +       }
> +
> +       ret = dlb2_string_to_int(max_num_events, value);
> +       if (ret < 0)
> +               return ret;
> +
> +       if (*max_num_events < 0 || *max_num_events >
> +                       DLB2_MAX_NUM_LDB_CREDITS) {
> +               DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d\n",
> +                            DLB2_MAX_NUM_LDB_CREDITS);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int
> +set_num_dir_credits(const char *key __rte_unused,
> +                   const char *value,
> +                   void *opaque)
> +{
> +       int *num_dir_credits = opaque;
> +       int ret;
> +
> +       if (value == NULL || opaque == NULL) {
> +               DLB2_LOG_ERR("NULL pointer\n");
> +               return -EINVAL;
> +       }
> +
> +       ret = dlb2_string_to_int(num_dir_credits, value);
> +       if (ret < 0)
> +               return ret;
> +
> +       if (*num_dir_credits < 0 ||
> +           *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
> +               DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
> +                            DLB2_MAX_NUM_DIR_CREDITS);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int
> +set_dev_id(const char *key __rte_unused,
> +          const char *value,
> +          void *opaque)
> +{
> +       int *dev_id = opaque;
> +       int ret;
> +
> +       if (value == NULL || opaque == NULL) {
> +               DLB2_LOG_ERR("NULL pointer\n");
> +               return -EINVAL;
> +       }
> +
> +       ret = dlb2_string_to_int(dev_id, value);
> +       if (ret < 0)
> +               return ret;
> +
> +       return 0;
> +}
> +
> +static int
> +set_cos(const char *key __rte_unused,
> +       const char *value,
> +       void *opaque)
> +{
> +       enum dlb2_cos *cos_id = opaque;
> +       int x = 0;
> +       int ret;
> +
> +       if (value == NULL || opaque == NULL) {
> +               DLB2_LOG_ERR("NULL pointer\n");
> +               return -EINVAL;
> +       }
> +
> +       ret = dlb2_string_to_int(&x, value);
> +       if (ret < 0)
> +               return ret;
> +
> +       if (x != DLB2_COS_DEFAULT && (x < DLB2_COS_0 || x > DLB2_COS_3)) {
> +               DLB2_LOG_ERR(
> +                       "COS %d out of range, must be DLB2_COS_DEFAULT or 0-3\n",
> +                       x);
> +               return -EINVAL;
> +       }
> +
> +       *cos_id = x;
> +
> +       return 0;
> +}
> +
> +
> +static int
> +set_qid_depth_thresh(const char *key __rte_unused,
> +                    const char *value,
> +                    void *opaque)
> +{
> +       struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
> +       int first, last, thresh, i;
> +
> +       if (value == NULL || opaque == NULL) {
> +               DLB2_LOG_ERR("NULL pointer\n");
> +               return -EINVAL;
> +       }
> +
> +       /* command line override may take one of the following 3 forms:
> +        * qid_depth_thresh=all:<threshold_value> ... all queues
> +        * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
> +        * qid_depth_thresh=qid:<threshold_value> ... just one queue
> +        */
> +       if (sscanf(value, "all:%d", &thresh) == 1) {
> +               first = 0;
> +               last = DLB2_MAX_NUM_QUEUES - 1;
> +       } else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
> +               /* we have everything we need */
> +       } else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
> +               last = first;
> +       } else {
> +               DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
> +               return -EINVAL;
> +       }
> +
> +       if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
> +               DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
> +               return -EINVAL;
> +       }
> +
> +       if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
> +               DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
> +                            DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
> +               return -EINVAL;
> +       }
> +
> +       for (i = first; i <= last; i++)
> +               qid_thresh->val[i] = thresh; /* indexed by qid */
> +
> +       return 0;
> +}
> +
> +static void
> +dlb2_entry_points_init(struct rte_eventdev *dev)
> +{
> +       RTE_SET_USED(dev);
> +
> +       /* Eventdev PMD entry points */
> +}
> +
> +int
> +dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
> +                           const char *name,
> +                           struct dlb2_devargs *dlb2_args)
> +{
> +       struct dlb2_eventdev *dlb2;
> +       int err;
> +
> +       dlb2 = dev->data->dev_private;
> +
> +       dlb2->event_dev = dev; /* backlink */
> +
> +       evdev_dlb2_default_info.driver_name = name;
> +
> +       dlb2->max_num_events_override = dlb2_args->max_num_events;
> +       dlb2->num_dir_credits_override = dlb2_args->num_dir_credits_override;
> +       dlb2->qm_instance.cos_id = dlb2_args->cos_id;
> +
> +       err = dlb2_iface_open(&dlb2->qm_instance, name);
> +       if (err < 0) {
> +               DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
> +                            err);
> +               return err;
> +       }
> +
> +       err = dlb2_iface_get_device_version(&dlb2->qm_instance,
> +                                           &dlb2->revision);
> +       if (err < 0) {
> +               DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d\n",
> +                            err);
> +               return err;
> +       }
> +
> +       err = dlb2_hw_query_resources(dlb2);
> +       if (err) {
> +               DLB2_LOG_ERR("get resources err=%d for %s\n",
> +                            err, name);
> +               return err;
> +       }
> +
> +       dlb2_iface_hardware_init(&dlb2->qm_instance);
> +
> +       err = dlb2_iface_get_cq_poll_mode(&dlb2->qm_instance, &dlb2->poll_mode);
> +       if (err < 0) {
> +               DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d\n",
> +                            err);
> +               return err;
> +       }
> +
> +       rte_spinlock_init(&dlb2->qm_instance.resource_lock);
> +
> +       dlb2_iface_low_level_io_init();
> +
> +       dlb2_entry_points_init(dev);
> +
> +       dlb2_init_queue_depth_thresholds(dlb2,
> +                                        dlb2_args->qid_depth_thresholds.val);
> +
> +       return 0;
> +}
> +
> +int
> +dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
> +                             const char *name)
> +{
> +       struct dlb2_eventdev *dlb2;
> +       int err;
> +
> +       dlb2 = dev->data->dev_private;
> +
> +       evdev_dlb2_default_info.driver_name = name;
> +
> +       err = dlb2_iface_open(&dlb2->qm_instance, name);
> +       if (err < 0) {
> +               DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
> +                            err);
> +               return err;
> +       }
> +
> +       err = dlb2_hw_query_resources(dlb2);
> +       if (err) {
> +               DLB2_LOG_ERR("get resources err=%d for %s\n",
> +                            err, name);
> +               return err;
> +       }
> +
> +       dlb2_iface_low_level_io_init();
> +
> +       dlb2_entry_points_init(dev);
> +
> +       return 0;
> +}
> +
> +int
> +dlb2_parse_params(const char *params,
> +                 const char *name,
> +                 struct dlb2_devargs *dlb2_args)
> +{
> +       int ret = 0;
> +       static const char * const args[] = { NUMA_NODE_ARG,
> +                                            DLB2_MAX_NUM_EVENTS,
> +                                            DLB2_NUM_DIR_CREDITS,
> +                                            DEV_ID_ARG,
> +                                            DLB2_QID_DEPTH_THRESH_ARG,
> +                                            DLB2_COS_ARG,
> +                                            NULL };
> +
> +       if (params != NULL && params[0] != '\0') {
> +               struct rte_kvargs *kvlist = rte_kvargs_parse(params, args);
> +
> +               if (kvlist == NULL) {
> +                       RTE_LOG(INFO, PMD,
> +                               "Ignoring unsupported parameters when creating device '%s'\n",
> +                               name);
> +               } else {
> +                       int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
> +                                                    set_numa_node,
> +                                                    &dlb2_args->socket_id);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing numa node parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       ret = rte_kvargs_process(kvlist, DLB2_MAX_NUM_EVENTS,
> +                                                set_max_num_events,
> +                                                &dlb2_args->max_num_events);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing max_num_events parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       ret = rte_kvargs_process(kvlist,
> +                                       DLB2_NUM_DIR_CREDITS,
> +                                       set_num_dir_credits,
> +                                       &dlb2_args->num_dir_credits_override);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
> +                                                set_dev_id,
> +                                                &dlb2_args->dev_id);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing dev_id parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       ret = rte_kvargs_process(
> +                                       kvlist,
> +                                       DLB2_QID_DEPTH_THRESH_ARG,
> +                                       set_qid_depth_thresh,
> +                                       &dlb2_args->qid_depth_thresholds);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       ret = rte_kvargs_process(kvlist, DLB2_COS_ARG,
> +                                                set_cos,
> +                                                &dlb2_args->cos_id);
> +                       if (ret != 0) {
> +                               DLB2_LOG_ERR("%s: Error parsing cos parameter",
> +                                            name);
> +                               rte_kvargs_free(kvlist);
> +                               return ret;
> +                       }
> +
> +                       rte_kvargs_free(kvlist);
> +               }
> +       }
> +       return ret;
> +}
> +RTE_LOG_REGISTER(eventdev_dlb2_log_level, pmd.event.dlb2, NOTICE);
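[Editor's note] For illustration, a devargs string these kvargs handlers
would parse might look like the fragment below. The key spellings are
assumptions inferred from the log messages (the actual strings behind
NUMA_NODE_ARG, DEV_ID_ARG, etc. are defined in dlb2_priv.h, which is not
part of this patch), and the PCI address is a placeholder:

```
-a 0000:00:00.0,max_num_events=2048,num_dir_credits=1024,qid_depth_thresh=all:64,cos=1
```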
> diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb2/dlb2_iface.c
> new file mode 100644
> index 0000000..0d93faf
> --- /dev/null
> +++ b/drivers/event/dlb2/dlb2_iface.c
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include <stdint.h>
> +
> +#include "dlb2_priv.h"
> +
> +/* DLB2 PMD Internal interface function pointers.
> + * If VDEV (bifurcated PMD),  these will resolve to functions that issue ioctls
> + * serviced by DLB kernel module.
> + * If PCI (PF PMD),  these will be implemented locally in user mode.
> + */
> +
> +void (*dlb2_iface_low_level_io_init)(void);
> +
> +int (*dlb2_iface_open)(struct dlb2_hw_dev *handle, const char *name);
> +
> +int (*dlb2_iface_get_device_version)(struct dlb2_hw_dev *handle,
> +                                    uint8_t *revision);
> +
> +void (*dlb2_iface_hardware_init)(struct dlb2_hw_dev *handle);
> +
> +int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,
> +                                  enum dlb2_cq_poll_modes *mode);
> +
> +int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
> +                                   struct dlb2_get_num_resources_args *rsrcs);
> diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb2/dlb2_iface.h
> new file mode 100644
> index 0000000..4fb416e
> --- /dev/null
> +++ b/drivers/event/dlb2/dlb2_iface.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef _DLB2_IFACE_H_
> +#define _DLB2_IFACE_H_
> +
> +/* DLB2 PMD internal interface function pointers.
> + * If VDEV (bifurcated PMD), these resolve to functions that issue ioctls
> + * serviced by the DLB2 kernel module.
> + * If PCI (PF PMD), these are implemented locally in user mode.
> + */
> +
> +extern void (*dlb2_iface_low_level_io_init)(void);
> +
> +extern int (*dlb2_iface_open)(struct dlb2_hw_dev *handle, const char *name);
> +
> +extern int (*dlb2_iface_get_device_version)(struct dlb2_hw_dev *handle,
> +                                           uint8_t *revision);
> +
> +extern void (*dlb2_iface_hardware_init)(struct dlb2_hw_dev *handle);
> +
> +extern int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,
> +                                         enum dlb2_cq_poll_modes *mode);
> +
> +extern int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
> +                               struct dlb2_get_num_resources_args *rsrcs);
> +
> +#endif /* _DLB2_IFACE_H_ */
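[Editor's note] These extern pointers are populated at probe time by whichever backend is linked in, before the eventdev ops can be used. A simplified, self-contained sketch of that wiring (all names here are illustrative; in the real driver the PF-specific probe code installs its implementations):

```c
#include <assert.h>
#include <stddef.h>

static int init_done;

/* interface slot, analogous to dlb2_iface_low_level_io_init */
static void (*iface_low_level_io_init)(void);

/* hypothetical PF-mode implementation */
static void pf_low_level_io_init(void)
{
	init_done = 1;
}

/* probe-time wiring: the backend installs its function before any
 * configuration path dereferences the interface pointer */
static void probe(void)
{
	iface_low_level_io_init = pf_low_level_io_init;
	if (iface_low_level_io_init != NULL)
		iface_low_level_io_init();
}
```

Because the indirection is confined to configuration (never the datapath), the extra pointer dereference has no fast-path cost.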
> diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
> index 54ba2c8..99b71f9 100644
> --- a/drivers/event/dlb2/meson.build
> +++ b/drivers/event/dlb2/meson.build
> @@ -1,7 +1,11 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2019-2020 Intel Corporation
>
> -sources = files(
> +sources = files('dlb2.c',
> +               'dlb2_iface.c',
> +               'pf/dlb2_main.c',
> +               'pf/dlb2_pf.c',
> +               'pf/base/dlb2_resource.c'
>  )
>
>  deps += ['mbuf', 'mempool', 'ring', 'pci', 'bus_pci']
> diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
> new file mode 100644
> index 0000000..428a5e8
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
> @@ -0,0 +1,367 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_HW_TYPES_H
> +#define __DLB2_HW_TYPES_H
> +
> +#include "dlb2_user.h"
> +
> +#include "dlb2_osdep_list.h"
> +#include "dlb2_osdep_types.h"
> +
> +#define DLB2_MAX_NUM_VDEVS                     16
> +#define DLB2_MAX_NUM_DOMAINS                   32
> +#define DLB2_MAX_NUM_LDB_QUEUES                        32 /* LDB == load-balanced */
> +#define DLB2_MAX_NUM_DIR_QUEUES                        64 /* DIR == directed */
> +#define DLB2_MAX_NUM_LDB_PORTS                 64
> +#define DLB2_MAX_NUM_DIR_PORTS                 64
> +#define DLB2_MAX_NUM_LDB_CREDITS               (8 * 1024)
> +#define DLB2_MAX_NUM_DIR_CREDITS               (2 * 1024)
> +#define DLB2_MAX_NUM_HIST_LIST_ENTRIES         2048
> +#define DLB2_MAX_NUM_AQED_ENTRIES              2048
> +#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ           8
> +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS    2
> +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES     5
> +#define DLB2_QID_PRIORITIES                    8
> +#define DLB2_NUM_ARB_WEIGHTS                   8
> +#define DLB2_MAX_WEIGHT                                255
> +#define DLB2_NUM_COS_DOMAINS                   4
> +#define DLB2_MAX_CQ_COMP_CHECK_LOOPS           409600
> +#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS         (32 * 64 * 1024 * (800 / 30))
> +#ifdef FPGA
> +#define DLB2_HZ                                        2000000
> +#else
> +#define DLB2_HZ                                        800000000
> +#endif
> +
> +#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
> +#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
> +
> +/* Interrupt related macros */
> +#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
> +#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
> +#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
> +       (DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
> +        DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
> +#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
> +       (DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
> +#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
> +       DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
> +#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
> +       DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
> +
> +/* DLB2 non-CQ interrupts (alarm, mailbox, WDT) */
> +#define DLB2_INT_NON_CQ 0
> +
> +#define DLB2_ALARM_HW_SOURCE_SYS 0
> +#define DLB2_ALARM_HW_SOURCE_DLB 1
> +
> +#define DLB2_ALARM_HW_UNIT_CHP 4
> +
> +#define DLB2_ALARM_SYS_AID_ILLEGAL_QID         3
> +#define DLB2_ALARM_SYS_AID_DISABLED_QID                4
> +#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW         5
> +#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ      1
> +#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
> +
> +#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
> +#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
> +#define DLB2_VF_BASE_CQ_VECTOR_ID           0
> +#define DLB2_VF_LAST_CQ_VECTOR_ID           30
> +#define DLB2_VF_MBOX_VECTOR_ID              31
> +#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
> +       (DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
> +        DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
> +
> +#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
> +                                            DLB2_MAX_NUM_DIR_PORTS + 1)
> +
> +/*
> + * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only used by
> + * the PF driver.
> + */
> +#define DLB2_DRV_LDB_PP_BASE   0x2300000
> +#define DLB2_DRV_LDB_PP_STRIDE 0x1000
> +#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
> +                               DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
> +#define DLB2_DRV_DIR_PP_BASE   0x2200000
> +#define DLB2_DRV_DIR_PP_STRIDE 0x1000
> +#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
> +                               DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
> +#define DLB2_LDB_PP_BASE       0x2100000
> +#define DLB2_LDB_PP_STRIDE     0x1000
> +#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
> +                               DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
> +#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
> +#define DLB2_DIR_PP_BASE       0x2000000
> +#define DLB2_DIR_PP_STRIDE     0x1000
> +#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
> +                               DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
> +#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
> +
> +struct dlb2_resource_id {
> +       u32 phys_id;
> +       u32 virt_id;
> +       u8 vdev_owned;
> +       u8 vdev_id;
> +};
> +
> +struct dlb2_freelist {
> +       u32 base;
> +       u32 bound;
> +       u32 offset;
> +};
> +
> +static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
> +{
> +       return list->bound - list->base - list->offset;
> +}
> +
> +struct dlb2_hcw {
> +       u64 data;
> +       /* Word 3 */
> +       u16 opaque;
> +       u8 qid;
> +       u8 sched_type:2;
> +       u8 priority:3;
> +       u8 msg_type:3;
> +       /* Word 4 */
> +       u16 lock_id;
> +       u8 ts_flag:1;
> +       u8 rsvd1:2;
> +       u8 no_dec:1;
> +       u8 cmp_id:4;
> +       u8 cq_token:1;
> +       u8 qe_comp:1;
> +       u8 qe_frag:1;
> +       u8 qe_valid:1;
> +       u8 int_arm:1;
> +       u8 error:1;
> +       u8 rsvd:2;
> +};
> +
> +struct dlb2_ldb_queue {
> +       struct dlb2_list_entry domain_list;
> +       struct dlb2_list_entry func_list;
> +       struct dlb2_resource_id id;
> +       struct dlb2_resource_id domain_id;
> +       u32 num_qid_inflights;
> +       u32 aqed_limit;
> +       u32 sn_group; /* sn == sequence number */
> +       u32 sn_slot;
> +       u32 num_mappings;
> +       u8 sn_cfg_valid;
> +       u8 num_pending_additions;
> +       u8 owned;
> +       u8 configured;
> +};
> +
> +/*
> + * Directed ports and queues are paired by nature, so the driver tracks them
> + * with a single data structure.
> + */
> +struct dlb2_dir_pq_pair {
> +       struct dlb2_list_entry domain_list;
> +       struct dlb2_list_entry func_list;
> +       struct dlb2_resource_id id;
> +       struct dlb2_resource_id domain_id;
> +       u32 ref_cnt;
> +       u8 init_tkn_cnt;
> +       u8 queue_configured;
> +       u8 port_configured;
> +       u8 owned;
> +       u8 enabled;
> +};
> +
> +enum dlb2_qid_map_state {
> +       /* The slot doesn't contain a valid queue mapping */
> +       DLB2_QUEUE_UNMAPPED,
> +       /* The slot contains a valid queue mapping */
> +       DLB2_QUEUE_MAPPED,
> +       /* The driver is mapping a queue into this slot */
> +       DLB2_QUEUE_MAP_IN_PROG,
> +       /* The driver is unmapping a queue from this slot */
> +       DLB2_QUEUE_UNMAP_IN_PROG,
> +       /*
> +        * The driver is unmapping a queue from this slot, and once complete
> +        * will replace it with another mapping.
> +        */
> +       DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
> +};
> +
> +struct dlb2_ldb_port_qid_map {
> +       enum dlb2_qid_map_state state;
> +       u16 qid;
> +       u16 pending_qid;
> +       u8 priority;
> +       u8 pending_priority;
> +};
> +
> +struct dlb2_ldb_port {
> +       struct dlb2_list_entry domain_list;
> +       struct dlb2_list_entry func_list;
> +       struct dlb2_resource_id id;
> +       struct dlb2_resource_id domain_id;
> +       /* The qid_map represents the hardware QID mapping state. */
> +       struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
> +       u32 hist_list_entry_base;
> +       u32 hist_list_entry_limit;
> +       u32 ref_cnt;
> +       u8 init_tkn_cnt;
> +       u8 num_pending_removals;
> +       u8 num_mappings;
> +       u8 owned;
> +       u8 enabled;
> +       u8 configured;
> +};
> +
> +struct dlb2_sn_group {
> +       u32 mode;
> +       u32 sequence_numbers_per_queue;
> +       u32 slot_use_bitmap;
> +       u32 id;
> +};
> +
> +static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
> +{
> +       u32 mask[] = {
> +               0x0000ffff,  /* 64 SNs per queue */
> +               0x000000ff,  /* 128 SNs per queue */
> +               0x0000000f,  /* 256 SNs per queue */
> +               0x00000003,  /* 512 SNs per queue */
> +               0x00000001}; /* 1024 SNs per queue */
> +
> +       return group->slot_use_bitmap == mask[group->mode];
> +}
> +
> +static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
> +{
> +       u32 bound[] = {16, 8, 4, 2, 1};
> +       u32 i;
> +
> +       for (i = 0; i < bound[group->mode]; i++) {
> +               if (!(group->slot_use_bitmap & (1 << i))) {
> +                       group->slot_use_bitmap |= 1 << i;
> +                       return i;
> +               }
> +       }
> +
> +       return -1;
> +}
> +
> +static inline void
> +dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
> +{
> +       group->slot_use_bitmap &= ~(1 << slot);
> +}
> +
> +static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
> +{
> +       int i, cnt = 0;
> +
> +       for (i = 0; i < 32; i++)
> +               cnt += !!(group->slot_use_bitmap & (1 << i));
> +
> +       return cnt;
> +}
> +
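[Editor's note] The sequence-number slot helpers above are small enough to exercise standalone. A reduced, self-contained copy (plain `unsigned int` instead of the driver's `u32`) showing the alloc/full/free lifecycle for mode 4, where a group holds a single 1024-SN slot:

```c
#include <assert.h>

struct sn_group {
	unsigned int mode;            /* index into the per-mode slot bound */
	unsigned int slot_use_bitmap; /* one bit per in-use slot */
};

static int sn_group_alloc_slot(struct sn_group *g)
{
	unsigned int bound[] = {16, 8, 4, 2, 1};
	unsigned int i;

	/* claim the lowest free bit below the mode's slot bound */
	for (i = 0; i < bound[g->mode]; i++) {
		if (!(g->slot_use_bitmap & (1U << i))) {
			g->slot_use_bitmap |= 1U << i;
			return (int)i;
		}
	}
	return -1; /* group full */
}

static void sn_group_free_slot(struct sn_group *g, int slot)
{
	g->slot_use_bitmap &= ~(1U << slot);
}
```

Note how the per-mode bound (16 slots of 64 SNs down to 1 slot of 1024 SNs) always multiplies out to the group's 1024 sequence numbers.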
> +struct dlb2_hw_domain {
> +       struct dlb2_function_resources *parent_func;
> +       struct dlb2_list_entry func_list;
> +       struct dlb2_list_head used_ldb_queues;
> +       struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
> +       struct dlb2_list_head used_dir_pq_pairs;
> +       struct dlb2_list_head avail_ldb_queues;
> +       struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> +       struct dlb2_list_head avail_dir_pq_pairs;
> +       u32 total_hist_list_entries;
> +       u32 avail_hist_list_entries;
> +       u32 hist_list_entry_base;
> +       u32 hist_list_entry_offset;
> +       u32 num_ldb_credits;
> +       u32 num_dir_credits;
> +       u32 num_avail_aqed_entries;
> +       u32 num_used_aqed_entries;
> +       struct dlb2_resource_id id;
> +       int num_pending_removals;
> +       int num_pending_additions;
> +       u8 configured;
> +       u8 started;
> +};
> +
> +struct dlb2_bitmap;
> +
> +struct dlb2_function_resources {
> +       struct dlb2_list_head avail_domains;
> +       struct dlb2_list_head used_domains;
> +       struct dlb2_list_head avail_ldb_queues;
> +       struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> +       struct dlb2_list_head avail_dir_pq_pairs;
> +       struct dlb2_bitmap *avail_hist_list_entries;
> +       u32 num_avail_domains;
> +       u32 num_avail_ldb_queues;
> +       u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> +       u32 num_avail_dir_pq_pairs;
> +       u32 num_avail_qed_entries;
> +       u32 num_avail_dqed_entries;
> +       u32 num_avail_aqed_entries;
> +       u8 locked; /* (VDEV only) */
> +};
> +
> +/*
> + * After initialization, each resource in dlb2_hw_resources is located in one
> + * of the following lists:
> + * -- The PF's available resources list. These are unconfigured resources owned
> + *     by the PF and not allocated to a dlb2 scheduling domain.
> + * -- A VDEV's available resources list. These are VDEV-owned unconfigured
> + *     resources not allocated to a dlb2 scheduling domain.
> + * -- A domain's available resources list. These are domain-owned unconfigured
> + *     resources.
> + * -- A domain's used resources list. These are domain-owned configured
> + *     resources.
> + *
> + * A resource moves to a new list when a VDEV or domain is created or destroyed,
> + * or when the resource is configured.
> + */
> +struct dlb2_hw_resources {
> +       struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
> +       struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
> +       struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
> +       struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
> +};
> +
> +struct dlb2_mbox {
> +       u32 *mbox;
> +       u32 *isr_in_progress;
> +};
> +
> +struct dlb2_sw_mbox {
> +       struct dlb2_mbox vdev_to_pf;
> +       struct dlb2_mbox pf_to_vdev;
> +       void (*pf_to_vdev_inject)(void *arg);
> +       void *pf_to_vdev_inject_arg;
> +};
> +
> +struct dlb2_hw {
> +       /* BAR 0 address */
> +       void  *csr_kva;
> +       unsigned long csr_phys_addr;
> +       /* BAR 2 address */
> +       void  *func_kva;
> +       unsigned long func_phys_addr;
> +
> +       /* Resource tracking */
> +       struct dlb2_hw_resources rsrcs;
> +       struct dlb2_function_resources pf;
> +       struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
> +       struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
> +       u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
> +
> +       /* Virtualization */
> +       int virt_mode;
> +       struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
> +       unsigned int pasid[DLB2_MAX_NUM_VDEVS];
> +};
> +
> +#endif /* __DLB2_HW_TYPES_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
> new file mode 100644
> index 0000000..ce462c0
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_mbox.h
> @@ -0,0 +1,596 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_BASE_DLB2_MBOX_H
> +#define __DLB2_BASE_DLB2_MBOX_H
> +
> +#include "dlb2_osdep_types.h"
> +#include "dlb2_regs.h"
> +
> +#define DLB2_MBOX_INTERFACE_VERSION 1
> +
> +/*
> + * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
> + * to send requests of its own (e.g. notifying a VF of an impending FLR).
> + * To avoid communication race conditions, e.g. the PF sends a response and then
> + * sends a request before the VF reads the response, the PF->VF mailbox is
> + * divided into two sections:
> + * - Bytes 0-47: PF responses
> + * - Bytes 48-63: PF requests
> + *
> + * Partitioning the PF->VF mailbox allows responses and requests to occupy the
> + * mailbox simultaneously.
> + */
> +#define DLB2_PF2VF_RESP_BYTES    48
> +#define DLB2_PF2VF_RESP_BASE     0
> +#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
> +
> +#define DLB2_PF2VF_REQ_BYTES     16
> +#define DLB2_PF2VF_REQ_BASE      (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
> +#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
> +
> +/*
> + * Similarly, the VF->PF mailbox is divided into two sections:
> + * - Bytes 0-239: VF requests
> + * -- (Bytes 0-3 are unused due to a hardware erratum)
> + * - Bytes 240-255: VF responses
> + */
> +#define DLB2_VF2PF_REQ_BYTES    236
> +#define DLB2_VF2PF_REQ_BASE     4
> +#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
> +
> +#define DLB2_VF2PF_RESP_BYTES    16
> +#define DLB2_VF2PF_RESP_BASE     (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
> +#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
> +
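[Editor's note] The partitioning above can be sketched with a plain word-addressed buffer standing in for the 256-byte VF->PF mailbox; the constants mirror the macros above, and `mbox_write_req` is a hypothetical helper, not a driver function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VF2PF_REQ_BASE      4                  /* bytes 0-3 unused (erratum) */
#define VF2PF_REQ_BASE_WORD (VF2PF_REQ_BASE / 4)

static uint32_t mbox[64]; /* 256-byte VF->PF mailbox viewed as 4-byte words */

/* place a request at the word-aligned request base, leaving the
 * response region (and the skipped first word) untouched */
static void mbox_write_req(const void *req, size_t len)
{
	memcpy(&mbox[VF2PF_REQ_BASE_WORD], req, len);
}
```

Because requests and responses occupy disjoint byte ranges, a response can sit in the mailbox while a new request is written, which is exactly the race the split layout is designed to avoid.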
> +/* VF-initiated commands */
> +enum dlb2_mbox_cmd_type {
> +       DLB2_MBOX_CMD_REGISTER,
> +       DLB2_MBOX_CMD_UNREGISTER,
> +       DLB2_MBOX_CMD_GET_NUM_RESOURCES,
> +       DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
> +       DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
> +       DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
> +       DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
> +       DLB2_MBOX_CMD_CREATE_LDB_PORT,
> +       DLB2_MBOX_CMD_CREATE_DIR_PORT,
> +       DLB2_MBOX_CMD_ENABLE_LDB_PORT,
> +       DLB2_MBOX_CMD_DISABLE_LDB_PORT,
> +       DLB2_MBOX_CMD_ENABLE_DIR_PORT,
> +       DLB2_MBOX_CMD_DISABLE_DIR_PORT,
> +       DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
> +       DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
> +       DLB2_MBOX_CMD_MAP_QID,
> +       DLB2_MBOX_CMD_UNMAP_QID,
> +       DLB2_MBOX_CMD_START_DOMAIN,
> +       DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
> +       DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
> +       DLB2_MBOX_CMD_ARM_CQ_INTR,
> +       DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
> +       DLB2_MBOX_CMD_GET_SN_ALLOCATION,
> +       DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
> +       DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
> +       DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
> +       DLB2_MBOX_CMD_GET_COS_BW,
> +       DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
> +       DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
> +
> +       /* NUM_DLB2_MBOX_CMD_TYPES must be last */
> +       NUM_DLB2_MBOX_CMD_TYPES,
> +};
> +
> +static const char dlb2_mbox_cmd_type_strings[][128] = {
> +       "DLB2_MBOX_CMD_REGISTER",
> +       "DLB2_MBOX_CMD_UNREGISTER",
> +       "DLB2_MBOX_CMD_GET_NUM_RESOURCES",
> +       "DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
> +       "DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
> +       "DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
> +       "DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
> +       "DLB2_MBOX_CMD_CREATE_LDB_PORT",
> +       "DLB2_MBOX_CMD_CREATE_DIR_PORT",
> +       "DLB2_MBOX_CMD_ENABLE_LDB_PORT",
> +       "DLB2_MBOX_CMD_DISABLE_LDB_PORT",
> +       "DLB2_MBOX_CMD_ENABLE_DIR_PORT",
> +       "DLB2_MBOX_CMD_DISABLE_DIR_PORT",
> +       "DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
> +       "DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
> +       "DLB2_MBOX_CMD_MAP_QID",
> +       "DLB2_MBOX_CMD_UNMAP_QID",
> +       "DLB2_MBOX_CMD_START_DOMAIN",
> +       "DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
> +       "DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
> +       "DLB2_MBOX_CMD_ARM_CQ_INTR",
> +       "DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
> +       "DLB2_MBOX_CMD_GET_SN_ALLOCATION",
> +       "DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
> +       "DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
> +       "DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
> +       "DLB2_MBOX_CMD_GET_COS_BW",
> +       "DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
> +       "DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
> +};
> +
> +/* PF-initiated commands */
> +enum dlb2_mbox_vf_cmd_type {
> +       DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
> +       DLB2_MBOX_VF_CMD_NOTIFICATION,
> +       DLB2_MBOX_VF_CMD_IN_USE,
> +
> +       /* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
> +       NUM_DLB2_MBOX_VF_CMD_TYPES,
> +};
> +
> +static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
> +       "DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
> +       "DLB2_MBOX_VF_CMD_NOTIFICATION",
> +       "DLB2_MBOX_VF_CMD_IN_USE",
> +};
> +
> +#define DLB2_MBOX_CMD_TYPE(hdr) \
> +       (((struct dlb2_mbox_req_hdr *)hdr)->type)
> +#define DLB2_MBOX_CMD_STRING(hdr) \
> +       dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
> +
> +enum dlb2_mbox_status_type {
> +       DLB2_MBOX_ST_SUCCESS,
> +       DLB2_MBOX_ST_INVALID_CMD_TYPE,
> +       DLB2_MBOX_ST_VERSION_MISMATCH,
> +       DLB2_MBOX_ST_INVALID_OWNER_VF,
> +};
> +
> +static const char dlb2_mbox_status_type_strings[][128] = {
> +       "DLB2_MBOX_ST_SUCCESS",
> +       "DLB2_MBOX_ST_INVALID_CMD_TYPE",
> +       "DLB2_MBOX_ST_VERSION_MISMATCH",
> +       "DLB2_MBOX_ST_INVALID_OWNER_VF",
> +};
> +
> +#define DLB2_MBOX_ST_TYPE(hdr) \
> +       (((struct dlb2_mbox_resp_hdr *)hdr)->status)
> +#define DLB2_MBOX_ST_STRING(hdr) \
> +       dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
> +
> +/* This structure is always the first field in a request structure */
> +struct dlb2_mbox_req_hdr {
> +       u32 type;
> +};
> +
> +/* This structure is always the first field in a response structure */
> +struct dlb2_mbox_resp_hdr {
> +       u32 status;
> +};
> +
> +struct dlb2_mbox_register_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u16 min_interface_version;
> +       u16 max_interface_version;
> +};
> +
> +struct dlb2_mbox_register_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 interface_version;
> +       u8 pf_id;
> +       u8 vf_id;
> +       u8 is_auxiliary_vf;
> +       u8 primary_vf_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_unregister_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_unregister_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_get_num_resources_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_get_num_resources_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u16 num_sched_domains;
> +       u16 num_ldb_queues;
> +       u16 num_ldb_ports;
> +       u16 num_cos_ldb_ports[4];
> +       u16 num_dir_ports;
> +       u32 num_atomic_inflights;
> +       u32 num_hist_list_entries;
> +       u32 max_contiguous_hist_list_entries;
> +       u16 num_ldb_credits;
> +       u16 num_dir_credits;
> +};
> +
> +struct dlb2_mbox_create_sched_domain_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 num_ldb_queues;
> +       u32 num_ldb_ports;
> +       u32 num_cos_ldb_ports[4];
> +       u32 num_dir_ports;
> +       u32 num_atomic_inflights;
> +       u32 num_hist_list_entries;
> +       u32 num_ldb_credits;
> +       u32 num_dir_credits;
> +       u8 cos_strict;
> +       u8 padding0[3];
> +       u32 padding1;
> +};
> +
> +struct dlb2_mbox_create_sched_domain_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_reset_sched_domain_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_reset_sched_domain_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +};
> +
> +struct dlb2_mbox_create_ldb_queue_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 num_sequence_numbers;
> +       u32 num_qid_inflights;
> +       u32 num_atomic_inflights;
> +       u32 lock_id_comp_level;
> +       u32 depth_threshold;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_create_ldb_queue_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_create_dir_queue_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 depth_threshold;
> +};
> +
> +struct dlb2_mbox_create_dir_queue_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_create_ldb_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u16 cq_depth;
> +       u16 cq_history_list_size;
> +       u8 cos_id;
> +       u8 cos_strict;
> +       u16 padding1;
> +       u64 cq_base_address;
> +};
> +
> +struct dlb2_mbox_create_ldb_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_create_dir_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u64 cq_base_address;
> +       u16 cq_depth;
> +       u16 padding0;
> +       s32 queue_id;
> +};
> +
> +struct dlb2_mbox_create_dir_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_enable_ldb_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_enable_ldb_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_disable_ldb_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_disable_ldb_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_enable_dir_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_enable_dir_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_disable_dir_port_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_disable_dir_port_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       s32 owned;
> +};
> +
> +struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       s32 owned;
> +};
> +
> +struct dlb2_mbox_map_qid_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 qid;
> +       u32 priority;
> +       u32 padding0;
> +};
> +
> +struct dlb2_mbox_map_qid_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 id;
> +};
> +
> +struct dlb2_mbox_unmap_qid_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 qid;
> +};
> +
> +struct dlb2_mbox_unmap_qid_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_start_domain_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +};
> +
> +struct dlb2_mbox_start_domain_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u16 port_id;
> +       u16 thresh;
> +       u16 vector;
> +       u16 owner_vf;
> +       u16 reserved[2];
> +};
> +
> +struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_enable_dir_port_intr_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u16 port_id;
> +       u16 thresh;
> +       u16 vector;
> +       u16 owner_vf;
> +       u16 reserved[2];
> +};
> +
> +struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_arm_cq_intr_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 is_ldb;
> +};
> +
> +struct dlb2_mbox_arm_cq_intr_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 padding0;
> +};
> +
> +/*
> + * The alert_id and aux_alert_data fields follow the format of the alerts
> + * defined in dlb2_types.h. The alert_id contains an enum dlb2_domain_alert_id
> + * value, and the aux_alert_data value varies depending on the alert.
> + */
> +struct dlb2_mbox_vf_alert_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 alert_id;
> +       u32 aux_alert_data;
> +};
> +
> +enum dlb2_mbox_vf_notification_type {
> +       DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
> +       DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
> +
> +       /* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
> +       NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
> +};
> +
> +struct dlb2_mbox_vf_notification_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 notification;
> +};
> +
> +struct dlb2_mbox_vf_in_use_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_vf_in_use_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 in_use;
> +};
> +
> +struct dlb2_mbox_get_sn_allocation_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 group_id;
> +};
> +
> +struct dlb2_mbox_get_sn_allocation_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 num;
> +};
> +
> +struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 queue_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 depth;
> +};
> +
> +struct dlb2_mbox_get_dir_queue_depth_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 queue_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 depth;
> +};
> +
> +struct dlb2_mbox_pending_port_unmaps_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 domain_id;
> +       u32 port_id;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_pending_port_unmaps_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 num;
> +};
> +
> +struct dlb2_mbox_get_cos_bw_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 cos_id;
> +};
> +
> +struct dlb2_mbox_get_cos_bw_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 num;
> +};
> +
> +struct dlb2_mbox_get_sn_occupancy_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 group_id;
> +};
> +
> +struct dlb2_mbox_get_sn_occupancy_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 num;
> +};
> +
> +struct dlb2_mbox_query_cq_poll_mode_cmd_req {
> +       struct dlb2_mbox_req_hdr hdr;
> +       u32 padding;
> +};
> +
> +struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
> +       struct dlb2_mbox_resp_hdr hdr;
> +       u32 error_code;
> +       u32 status;
> +       u32 mode;
> +};
> +
> +#endif /* __DLB2_BASE_DLB2_MBOX_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
> new file mode 100644
> index 0000000..43f2125
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
> @@ -0,0 +1,247 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_OSDEP_H
> +#define __DLB2_OSDEP_H
> +
> +#include <string.h>
> +#include <time.h>
> +#include <unistd.h>
> +#include <pthread.h>
> +
> +#include <rte_string_fns.h>
> +#include <rte_cycles.h>
> +#include <rte_io.h>
> +#include <rte_log.h>
> +#include <rte_spinlock.h>
> +#include "../dlb2_main.h"
> +#include "dlb2_resource.h"
> +#include "../../dlb2_log.h"
> +#include "../../dlb2_user.h"
> +
> +#define DLB2_PCI_REG_READ(addr)        rte_read32((void *)(addr))
> +#define DLB2_PCI_REG_WRITE(reg, value) rte_write32((value), (void *)(reg))
> +
> +/* Read/write register 'reg' in the CSR BAR space */
> +#define DLB2_CSR_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->csr_kva + (reg)))
> +#define DLB2_CSR_RD(hw, reg) \
> +       DLB2_PCI_REG_READ(DLB2_CSR_REG_ADDR((hw), (reg)))
> +#define DLB2_CSR_WR(hw, reg, value) \
> +       DLB2_PCI_REG_WRITE(DLB2_CSR_REG_ADDR((hw), (reg)), (value))
> +
> +/* Read/write register 'reg' in the func BAR space */
> +#define DLB2_FUNC_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->func_kva + (reg)))
> +#define DLB2_FUNC_RD(hw, reg) \
> +       DLB2_PCI_REG_READ(DLB2_FUNC_REG_ADDR((hw), (reg)))
> +#define DLB2_FUNC_WR(hw, reg, value) \
> +       DLB2_PCI_REG_WRITE(DLB2_FUNC_REG_ADDR((hw), (reg)), (value))
> +
> +/* Map to the PMD's logging interface */
> +#define DLB2_ERR(dev, fmt, args...) \
> +       DLB2_LOG_ERR(fmt, ## args)
> +
> +#define DLB2_INFO(dev, fmt, args...) \
> +       DLB2_LOG_INFO(fmt, ## args)
> +
> +#define DLB2_DEBUG(dev, fmt, args...) \
> +       DLB2_LOG_DBG(fmt, ## args)
> +
> +/**
> + * os_udelay() - busy-wait for a number of microseconds
> + * @usecs: delay duration.
> + */
> +static inline void os_udelay(int usecs)
> +{
> +       rte_delay_us(usecs);
> +}
> +
> +/**
> + * os_msleep() - sleep for a number of milliseconds
> + * @msecs: delay duration.
> + */
> +static inline void os_msleep(int msecs)
> +{
> +       rte_delay_ms(msecs);
> +}
> +
> +#define DLB2_PP_BASE(__is_ldb) \
> +       ((__is_ldb) ? DLB2_LDB_PP_BASE : DLB2_DIR_PP_BASE)
> +
> +/**
> + * os_map_producer_port() - map a producer port into the caller's address space
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: port ID
> + * @is_ldb: true for load-balanced port, false for a directed port
> + *
> + * This function maps the requested producer port memory into the caller's
> + * address space.
> + *
> + * Return:
> + * Returns the base address at which the PP memory was mapped, else NULL.
> + */
> +static inline void *os_map_producer_port(struct dlb2_hw *hw,
> +                                        u8 port_id,
> +                                        bool is_ldb)
> +{
> +       uint64_t addr;
> +       uint64_t pp_dma_base;
> +
> +       pp_dma_base = (uintptr_t)hw->func_kva + DLB2_PP_BASE(is_ldb);
> +       addr = (pp_dma_base + (PAGE_SIZE * port_id));
> +
> +       return (void *)(uintptr_t)addr;
> +}
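The mapping above is plain base-plus-offset arithmetic: each producer port owns one page in the func BAR, at the LDB or DIR region base plus port_id pages. As a standalone illustration (the `EX_` constants below are placeholders, not the real DLB2 offsets):

```c
#include <stdint.h>

#define EX_PAGE_SIZE   4096u          /* assumed page size */
#define EX_LDB_PP_BASE 0x2100000ULL   /* placeholder region offset */
#define EX_DIR_PP_BASE 0x2000000ULL   /* placeholder region offset */

/* Mirrors os_map_producer_port(): func BAR base + region base +
 * port_id pages yields the port's producer-port page. */
static uint64_t ex_pp_addr(uint64_t func_base, uint8_t port_id, int is_ldb)
{
	uint64_t region = is_ldb ? EX_LDB_PP_BASE : EX_DIR_PP_BASE;

	return func_base + region + (uint64_t)EX_PAGE_SIZE * port_id;
}
```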
> +
> +/**
> + * os_unmap_producer_port() - unmap a producer port
> + * @hw: dlb2_hw handle for a particular device.
> + * @addr: mapped producer port address
> + *
> + * This function undoes os_map_producer_port() by unmapping the producer port
> + * memory from the caller's address space. In this user-space PMD it is a
> + * no-op.
> + */
> +static inline void os_unmap_producer_port(struct dlb2_hw *hw, void *addr)
> +{
> +       RTE_SET_USED(hw);
> +       RTE_SET_USED(addr);
> +}
> +
> +/**
> + * os_fence_hcw() - fence an HCW to ensure it arrives at the device
> + * @hw: dlb2_hw handle for a particular device.
> + * @pp_addr: producer port address
> + */
> +static inline void os_fence_hcw(struct dlb2_hw *hw, u64 *pp_addr)
> +{
> +       RTE_SET_USED(hw);
> +
> +       /* To ensure outstanding HCWs reach the device, read the PP address. IA
> +        * memory ordering prevents reads from passing older writes, and the
> +        * mfence also ensures this.
> +        */
> +       rte_mb();
> +
> +       *(volatile u64 *)pp_addr;
> +}
> +
> +/**
> + * os_enqueue_four_hcws() - enqueue four HCWs to DLB
> + * @hw: dlb2_hw handle for a particular device.
> + * @hcw: pointer to the 64B-aligned contiguous HCW memory
> + * @addr: producer port address
> + */
> +static inline void os_enqueue_four_hcws(struct dlb2_hw *hw,
> +                                       struct dlb2_hcw *hcw,
> +                                       void *addr)
> +{
> +       struct dlb2_dev *dlb2_dev;
> +
> +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> +
> +       dlb2_dev->enqueue_four(hcw, addr);
> +}
> +
> +/**
> + * DLB2_HW_ERR() - log an error message
> + * @dlb2: dlb2_hw handle for a particular device.
> + * @...: variable string args.
> + */
> +#define DLB2_HW_ERR(dlb2, ...) do {    \
> +       RTE_SET_USED(dlb2);             \
> +       DLB2_ERR(dlb2, __VA_ARGS__);    \
> +} while (0)
> +
> +/**
> + * DLB2_HW_DBG() - log a debug message
> + * @dlb2: dlb2_hw handle for a particular device.
> + * @...: variable string args.
> + */
> +#define DLB2_HW_DBG(dlb2, ...) do {    \
> +       RTE_SET_USED(dlb2);             \
> +       DLB2_DEBUG(dlb2, __VA_ARGS__);  \
> +} while (0)
> +
> +/* The callback runs until it completes all outstanding QID->CQ
> + * map and unmap requests. To prevent deadlock, this function gives other
> + * threads a chance to grab the resource mutex and configure hardware.
> + */
> +static void *dlb2_complete_queue_map_unmap(void *__args)
> +{
> +       struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)__args;
> +       int ret;
> +
> +       while (1) {
> +               rte_spinlock_lock(&dlb2_dev->resource_mutex);
> +
> +               ret = dlb2_finish_unmap_qid_procedures(&dlb2_dev->hw);
> +               ret += dlb2_finish_map_qid_procedures(&dlb2_dev->hw);
> +
> +               if (ret != 0) {
> +                       rte_spinlock_unlock(&dlb2_dev->resource_mutex);
> +                       /* Relinquish the CPU so the application can process
> +                        * its CQs, so this function doesn't deadlock.
> +                        */
> +                       sched_yield();
> +               } else {
> +                       break;
> +               }
> +       }
> +
> +       dlb2_dev->worker_launched = false;
> +
> +       rte_spinlock_unlock(&dlb2_dev->resource_mutex);
> +
> +       return NULL;
> +}
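The loop above is a lock/try/yield pattern: hold the resource mutex only while attempting to finish procedures, and release it plus sched_yield() whenever work remains so the application thread can drain its CQs. A toy stand-in with no DPDK dependencies (names are mine; the lock is elided as comments):

```c
#include <sched.h>

/* Toy stand-in for dlb2_complete_queue_map_unmap(): keep retrying while
 * work remains, yielding the CPU between attempts so other threads (the
 * application draining its CQs) can make progress. work_left stands in
 * for the count of unfinished map/unmap procedures. */
static int ex_drain(int work_left)
{
	int yields = 0;

	while (work_left != 0) {
		/* the resource mutex would be taken here */
		work_left--;        /* one procedure finishes per pass */
		/* mutex dropped here, then relinquish the CPU */
		sched_yield();
		yields++;
	}
	return yields;
}
```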
> +
> +/**
> + * os_schedule_work() - launch a thread to process pending map and unmap work
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function launches a control thread that runs until all pending
> + * map and unmap procedures are complete.
> + */
> +static inline void os_schedule_work(struct dlb2_hw *hw)
> +{
> +       struct dlb2_dev *dlb2_dev;
> +       pthread_t complete_queue_map_unmap_thread;
> +       int ret;
> +
> +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> +
> +       ret = rte_ctrl_thread_create(&complete_queue_map_unmap_thread,
> +                                    "dlb_queue_unmap_waiter",
> +                                    NULL,
> +                                    dlb2_complete_queue_map_unmap,
> +                                    dlb2_dev);
> +       if (ret)
> +               DLB2_ERR(dlb2_dev,
> +                        "Could not create queue complete map/unmap thread, err=%d\n",
> +                        ret);
> +       else
> +               dlb2_dev->worker_launched = true;
> +}
> +
> +/**
> + * os_worker_active() - query whether the map/unmap worker thread is active
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function returns a boolean indicating whether a thread (launched by
> + * os_schedule_work()) is active. This function is used to determine
> + * whether or not to launch a worker thread.
> + */
> +static inline bool os_worker_active(struct dlb2_hw *hw)
> +{
> +       struct dlb2_dev *dlb2_dev;
> +
> +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> +
> +       return dlb2_dev->worker_launched;
> +}
> +
> +#endif /*  __DLB2_OSDEP_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h b/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
> new file mode 100644
> index 0000000..423233b
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
> @@ -0,0 +1,440 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_OSDEP_BITMAP_H
> +#define __DLB2_OSDEP_BITMAP_H
> +
> +#include <stdint.h>
> +#include <stdbool.h>
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <rte_bitmap.h>
> +#include <rte_string_fns.h>
> +#include <rte_malloc.h>
> +#include <rte_errno.h>
> +#include "../dlb2_main.h"
> +
> +/*************************/
> +/*** Bitmap operations ***/
> +/*************************/
> +struct dlb2_bitmap {
> +       struct rte_bitmap *map;
> +       unsigned int len;
> +};
> +
> +/**
> + * dlb2_bitmap_alloc() - alloc a bitmap data structure
> + * @bitmap: pointer to dlb2_bitmap structure pointer.
> + * @len: number of entries in the bitmap.
> + *
> + * This function allocates a bitmap and initializes it with length @len. All
> + * entries are initially zero.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or len is 0.
> + * ENOMEM - could not allocate memory for the bitmap data structure.
> + */
> +static inline int dlb2_bitmap_alloc(struct dlb2_bitmap **bitmap,
> +                                   unsigned int len)
> +{
> +       struct dlb2_bitmap *bm;
> +       void *mem;
> +       uint32_t alloc_size;
> +       uint32_t nbits = (uint32_t)len;
> +
> +       if (bitmap == NULL || nbits == 0)
> +               return -EINVAL;
> +
> +       /* Allocate DLB2 bitmap control struct */
> +       bm = rte_malloc("DLB2_PF",
> +                       sizeof(struct dlb2_bitmap),
> +                       RTE_CACHE_LINE_SIZE);
> +
> +       if (bm == NULL)
> +               return -ENOMEM;
> +
> +       /* Allocate bitmap memory */
> +       alloc_size = rte_bitmap_get_memory_footprint(nbits);
> +       mem = rte_malloc("DLB2_PF_BITMAP", alloc_size, RTE_CACHE_LINE_SIZE);
> +       if (mem == NULL) {
> +               rte_free(bm);
> +               return -ENOMEM;
> +       }
> +
> +       bm->map = rte_bitmap_init(len, mem, alloc_size);
> +       if (bm->map == NULL) {
> +               rte_free(mem);
> +               rte_free(bm);
> +               return -ENOMEM;
> +       }
> +
> +       bm->len = len;
> +
> +       *bitmap = bm;
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_free() - free a previously allocated bitmap data structure
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * This function frees a bitmap that was allocated with dlb2_bitmap_alloc().
> + */
> +static inline void dlb2_bitmap_free(struct dlb2_bitmap *bitmap)
> +{
> +       if (bitmap == NULL)
> +               return;
> +
> +       rte_free(bitmap->map);
> +       rte_free(bitmap);
> +}
> +
> +/**
> + * dlb2_bitmap_fill() - fill a bitmap with all 1s
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * This function sets all bitmap values to 1.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_fill(struct dlb2_bitmap *bitmap)
> +{
> +       unsigned int i;
> +
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       for (i = 0; i != bitmap->len; i++)
> +               rte_bitmap_set(bitmap->map, i);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_zero() - fill a bitmap with all 0s
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * This function sets all bitmap values to 0.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_zero(struct dlb2_bitmap *bitmap)
> +{
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       rte_bitmap_reset(bitmap->map);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_set() - set a bitmap entry
> + * @bitmap: pointer to dlb2_bitmap structure.
> + * @bit: bit index.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
> + *         bitmap length.
> + */
> +static inline int dlb2_bitmap_set(struct dlb2_bitmap *bitmap,
> +                                 unsigned int bit)
> +{
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->len <= bit)
> +               return -EINVAL;
> +
> +       rte_bitmap_set(bitmap->map, bit);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_set_range() - set a range of bitmap entries
> + * @bitmap: pointer to dlb2_bitmap structure.
> + * @bit: starting bit index.
> + * @len: length of the range.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
> + *         length.
> + */
> +static inline int dlb2_bitmap_set_range(struct dlb2_bitmap *bitmap,
> +                                       unsigned int bit,
> +                                       unsigned int len)
> +{
> +       unsigned int i;
> +
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->len < bit + len)
> +               return -EINVAL;
> +
> +       for (i = 0; i != len; i++)
> +               rte_bitmap_set(bitmap->map, bit + i);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_clear() - clear a bitmap entry
> + * @bitmap: pointer to dlb2_bitmap structure.
> + * @bit: bit index.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
> + *         bitmap length.
> + */
> +static inline int dlb2_bitmap_clear(struct dlb2_bitmap *bitmap,
> +                                   unsigned int bit)
> +{
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->len <= bit)
> +               return -EINVAL;
> +
> +       rte_bitmap_clear(bitmap->map, bit);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_clear_range() - clear a range of bitmap entries
> + * @bitmap: pointer to dlb2_bitmap structure.
> + * @bit: starting bit index.
> + * @len: length of the range.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
> + *         length.
> + */
> +static inline int dlb2_bitmap_clear_range(struct dlb2_bitmap *bitmap,
> +                                         unsigned int bit,
> +                                         unsigned int len)
> +{
> +       unsigned int i;
> +
> +       if (bitmap  == NULL || bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->len < bit + len)
> +               return -EINVAL;
> +
> +       for (i = 0; i != len; i++)
> +               rte_bitmap_clear(bitmap->map, bit + i);
> +
> +       return 0;
> +}
> +
> +/**
> + * dlb2_bitmap_find_set_bit_range() - find a range of set bits
> + * @bitmap: pointer to dlb2_bitmap structure.
> + * @len: length of the range.
> + *
> + * This function looks for a range of set bits of length @len.
> + *
> + * Return:
> + * Returns the base bit index upon success, < 0 otherwise.
> + *
> + * Errors:
> + * ENOENT - unable to find a length *len* range of set bits.
> + * EINVAL - bitmap is NULL or is uninitialized, or len is invalid.
> + */
> +static inline int dlb2_bitmap_find_set_bit_range(struct dlb2_bitmap *bitmap,
> +                                                unsigned int len)
> +{
> +       unsigned int i, j = 0;
> +
> +       if (bitmap  == NULL || bitmap->map == NULL || len == 0)
> +               return -EINVAL;
> +
> +       if (bitmap->len < len)
> +               return -ENOENT;
> +
> +       for (i = 0; i != bitmap->len; i++) {
> +               if  (rte_bitmap_get(bitmap->map, i)) {
> +                       if (++j == len)
> +                               return i - j + 1;
> +               } else {
> +                       j = 0;
> +               }
> +       }
> +
> +       /* No set bit range of length len? */
> +       return -ENOENT;
> +}
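The range search above is a single pass with a run counter: increment on each set bit, reset on a clear bit, and the base index falls out as i - j + 1 when the run reaches the requested length. The same scan over a plain bool array, as a hedged self-contained sketch (not the driver code, which iterates an rte_bitmap):

```c
#include <stdbool.h>

/* Same scan as dlb2_bitmap_find_set_bit_range(): j counts the current
 * run of set bits, resets on a clear bit, and the base index is
 * i - j + 1 once the run reaches len. Returns -1 if no such run. */
static int ex_find_set_range(const bool *bits, unsigned int nbits,
			     unsigned int len)
{
	unsigned int i, j = 0;

	if (len == 0 || nbits < len)
		return -1;

	for (i = 0; i != nbits; i++) {
		if (bits[i]) {
			if (++j == len)
				return (int)(i - j + 1);
		} else {
			j = 0;
		}
	}
	return -1; /* no run of length len */
}
```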
> +
> +/**
> + * dlb2_bitmap_find_set_bit() - find the first set bit
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * This function looks for a single set bit.
> + *
> + * Return:
> + * Returns the index of the first set bit upon success, < 0 otherwise.
> + *
> + * Errors:
> + * ENOENT - no set bit was found.
> + * EINVAL - bitmap is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_find_set_bit(struct dlb2_bitmap *bitmap)
> +{
> +       unsigned int i;
> +
> +       if (bitmap == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       for (i = 0; i != bitmap->len; i++) {
> +               if  (rte_bitmap_get(bitmap->map, i))
> +                       return i;
> +       }
> +
> +       return -ENOENT;
> +}
> +
> +/**
> + * dlb2_bitmap_count() - returns the number of set bits
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * This function counts the number of set bits.
> + *
> + * Return:
> + * Returns the number of set bits upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_count(struct dlb2_bitmap *bitmap)
> +{
> +       int weight = 0;
> +       unsigned int i;
> +
> +       if (bitmap == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       for (i = 0; i != bitmap->len; i++) {
> +               if  (rte_bitmap_get(bitmap->map, i))
> +                       weight++;
> +       }
> +       return weight;
> +}
> +
> +/**
> + * dlb2_bitmap_longest_set_range() - returns longest contiguous range of set
> + *                                   bits
> + * @bitmap: pointer to dlb2_bitmap structure.
> + *
> + * Return:
> + * Returns the bitmap's longest contiguous range of set bits upon success,
> + * <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - bitmap is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_longest_set_range(struct dlb2_bitmap *bitmap)
> +{
> +       int max_len = 0, len = 0;
> +       unsigned int i;
> +
> +       if (bitmap == NULL)
> +               return -EINVAL;
> +
> +       if (bitmap->map == NULL)
> +               return -EINVAL;
> +
> +       for (i = 0; i != bitmap->len; i++) {
> +               if  (rte_bitmap_get(bitmap->map, i)) {
> +                       len++;
> +               } else {
> +                       if (len > max_len)
> +                               max_len = len;
> +                       len = 0;
> +               }
> +       }
> +
> +       if (len > max_len)
> +               max_len = len;
> +
> +       return max_len;
> +}
> +
> +/**
> + * dlb2_bitmap_or() - store the logical 'or' of two bitmaps into a third
> + * @dest: pointer to dlb2_bitmap structure, which will contain the results of
> + *       the 'or' of src1 and src2.
> + * @src1: pointer to dlb2_bitmap structure, will be 'or'ed with src2.
> + * @src2: pointer to dlb2_bitmap structure, will be 'or'ed with src1.
> + *
> + * This function 'or's two bitmaps together and stores the result in a third
> + * bitmap. The source and destination bitmaps can be the same.
> + *
> + * Return:
> + * Returns the number of set bits upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - One of the bitmaps is NULL or is uninitialized.
> + */
> +static inline int dlb2_bitmap_or(struct dlb2_bitmap *dest,
> +                                struct dlb2_bitmap *src1,
> +                                struct dlb2_bitmap *src2)
> +{
> +       unsigned int i, min;
> +       int numset = 0;
> +
> +       if (dest  == NULL || dest->map == NULL ||
> +           src1  == NULL || src1->map == NULL ||
> +           src2  == NULL || src2->map == NULL)
> +               return -EINVAL;
> +
> +       min = dest->len;
> +       min = (min > src1->len) ? src1->len : min;
> +       min = (min > src2->len) ? src2->len : min;
> +
> +       for (i = 0; i != min; i++) {
> +               if  (rte_bitmap_get(src1->map, i) ||
> +                    rte_bitmap_get(src2->map, i)) {
> +                       rte_bitmap_set(dest->map, i);
> +                       numset++;
> +               } else {
> +                       rte_bitmap_clear(dest->map, i);
> +               }
> +       }
> +
> +       return numset;
> +}
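A word-at-a-time equivalent of the OR-and-count above (illustrative only; the driver works bit by bit over rte_bitmap because the three maps may differ in length):

```c
#include <stdint.h>

/* Word-sized version of dlb2_bitmap_or(): OR two 32-bit maps into dest
 * and return the number of set bits in the result, like the driver
 * function returns numset. */
static int ex_or_count(uint32_t a, uint32_t b, uint32_t *dest)
{
	uint32_t r = a | b;
	int n = 0;

	*dest = r;
	for (; r != 0; r &= r - 1)  /* clear the lowest set bit each pass */
		n++;
	return n;
}
```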
> +
> +#endif /*  __DLB2_OSDEP_BITMAP_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_list.h b/drivers/event/dlb2/pf/base/dlb2_osdep_list.h
> new file mode 100644
> index 0000000..5531739
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_list.h
> @@ -0,0 +1,131 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_OSDEP_LIST_H
> +#define __DLB2_OSDEP_LIST_H
> +
> +#include <rte_tailq.h>
> +
> +struct dlb2_list_entry {
> +       TAILQ_ENTRY(dlb2_list_entry) node;
> +};
> +
> +/* Dummy - just a struct definition */
> +TAILQ_HEAD(dlb2_list_head, dlb2_list_entry);
> +
> +/* =================
> + * TAILQ Supplements
> + * =================
> + */
> +
> +#ifndef TAILQ_FOREACH_ENTRY
> +#define TAILQ_FOREACH_ENTRY(ptr, head, name, iter)             \
> +       for ((iter) = TAILQ_FIRST(&head);                       \
> +           (iter)                                              \
> +               && (ptr = container_of(iter, typeof(*(ptr)), name)); \
> +           (iter) = TAILQ_NEXT((iter), node))
> +#endif
> +
> +#ifndef TAILQ_FOREACH_ENTRY_SAFE
> +#define TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, iter, tvar)  \
> +       for ((iter) = TAILQ_FIRST(&head);                       \
> +           (iter) &&                                           \
> +               (ptr = container_of(iter, typeof(*(ptr)), name)) &&\
> +               ((tvar) = TAILQ_NEXT((iter), node), 1); \
> +           (iter) = (tvar))
> +#endif
> +
> +/***********************/
> +/*** List operations ***/
> +/***********************/
> +
> +/**
> + * dlb2_list_init_head() - initialize the head of a list
> + * @head: list head
> + */
> +static inline void dlb2_list_init_head(struct dlb2_list_head *head)
> +{
> +       TAILQ_INIT(head);
> +}
> +
> +/**
> + * dlb2_list_add() - add an entry to a list
> + * @head: list head
> + * @entry: new list entry
> + */
> +static inline void
> +dlb2_list_add(struct dlb2_list_head *head, struct dlb2_list_entry *entry)
> +{
> +       TAILQ_INSERT_TAIL(head, entry, node);
> +}
> +
> +/**
> + * dlb2_list_del() - delete an entry from a list
> + * @entry: list entry
> + * @head: list head
> + */
> +static inline void dlb2_list_del(struct dlb2_list_head *head,
> +                                struct dlb2_list_entry *entry)
> +{
> +       TAILQ_REMOVE(head, entry, node);
> +}
> +
> +/**
> + * dlb2_list_empty() - check if a list is empty
> + * @head: list head
> + *
> + * Return:
> + * Returns 1 if empty, 0 if not.
> + */
> +static inline int dlb2_list_empty(struct dlb2_list_head *head)
> +{
> +       return TAILQ_EMPTY(head);
> +}
> +
> +/**
> + * dlb2_list_splice() - splice a list
> + * @src_head: list to be added
> + * @head: where src_head will be inserted
> + */
> +static inline void dlb2_list_splice(struct dlb2_list_head *src_head,
> +                                   struct dlb2_list_head *head)
> +{
> +       TAILQ_CONCAT(head, src_head, node);
> +}
> +
> +/**
> + * DLB2_LIST_HEAD() - retrieve the head of the list
> + * @head: list head
> + * @type: type of the list variable
> + * @name: name of the list field within the containing struct
> + */
> +#define DLB2_LIST_HEAD(head, type, name)                       \
> +       (TAILQ_FIRST(&head) ?                                   \
> +               container_of(TAILQ_FIRST(&head), type, name) :  \
> +               NULL)
> +
> +/**
> + * DLB2_LIST_FOR_EACH() - iterate over a list
> + * @head: list head
> + * @ptr: pointer to struct containing a struct list
> + * @name: name of the list field within the containing struct
> + * @tmp_iter: iterator variable
> + */
> +#define DLB2_LIST_FOR_EACH(head, ptr, name, tmp_iter) \
> +       TAILQ_FOREACH_ENTRY(ptr, head, name, tmp_iter)
> +
> +/**
> + * DLB2_LIST_FOR_EACH_SAFE() - iterate over a list. This loop works even if
> + * an element is removed from the list while processing it.
> + * @head: list head
> + * @ptr: pointer to struct containing a struct list
> + * @ptr_tmp: pointer to struct containing a struct list (temporary)
> + * @name: name of the list field within the containing struct
> + * @tmp_iter: iterator variable
> + * @saf_itr: iterator variable (temporary)
> + */
> +#define DLB2_LIST_FOR_EACH_SAFE(head, ptr, ptr_tmp, name, tmp_iter, saf_itr) \
> +       TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, tmp_iter, saf_itr)
> +
> +#endif /*  __DLB2_OSDEP_LIST_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_types.h b/drivers/event/dlb2/pf/base/dlb2_osdep_types.h
> new file mode 100644
> index 0000000..0a48f7e
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_types.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_OSDEP_TYPES_H
> +#define __DLB2_OSDEP_TYPES_H
> +
> +#include <linux/types.h>
> +
> +#include <inttypes.h>
> +#include <ctype.h>
> +#include <stdint.h>
> +#include <stdbool.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <errno.h>
> +
> +/* Types for user mode PF PMD */
> +typedef uint8_t         u8;
> +typedef int8_t          s8;
> +typedef uint16_t        u16;
> +typedef int16_t         s16;
> +typedef uint32_t        u32;
> +typedef int32_t         s32;
> +typedef uint64_t        u64;
> +
> +#define __iomem
> +
> +/* END types for user mode PF PMD */
> +
> +#endif /* __DLB2_OSDEP_TYPES_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
> new file mode 100644
> index 0000000..43ecad4
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
> @@ -0,0 +1,2527 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_REGS_H
> +#define __DLB2_REGS_H
> +
> +#include "dlb2_osdep_types.h"
> +
> +#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
> +#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
> +       (0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
> +union dlb2_func_pf_vf2pf_mailbox {
> +       struct {
> +               u32 msg : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
> +       (0x1f00 + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
> +union dlb2_func_pf_vf2pf_mailbox_isr {
> +       struct {
> +               u32 vf0_isr : 1;
> +               u32 vf1_isr : 1;
> +               u32 vf2_isr : 1;
> +               u32 vf3_isr : 1;
> +               u32 vf4_isr : 1;
> +               u32 vf5_isr : 1;
> +               u32 vf6_isr : 1;
> +               u32 vf7_isr : 1;
> +               u32 vf8_isr : 1;
> +               u32 vf9_isr : 1;
> +               u32 vf10_isr : 1;
> +               u32 vf11_isr : 1;
> +               u32 vf12_isr : 1;
> +               u32 vf13_isr : 1;
> +               u32 vf14_isr : 1;
> +               u32 vf15_isr : 1;
> +               u32 rsvd0 : 16;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
> +       (0x1f04 + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
> +union dlb2_func_pf_vf2pf_flr_isr {
> +       struct {
> +               u32 vf0_isr : 1;
> +               u32 vf1_isr : 1;
> +               u32 vf2_isr : 1;
> +               u32 vf3_isr : 1;
> +               u32 vf4_isr : 1;
> +               u32 vf5_isr : 1;
> +               u32 vf6_isr : 1;
> +               u32 vf7_isr : 1;
> +               u32 vf8_isr : 1;
> +               u32 vf9_isr : 1;
> +               u32 vf10_isr : 1;
> +               u32 vf11_isr : 1;
> +               u32 vf12_isr : 1;
> +               u32 vf13_isr : 1;
> +               u32 vf14_isr : 1;
> +               u32 vf15_isr : 1;
> +               u32 rsvd0 : 16;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
> +       (0x1f10 + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
> +union dlb2_func_pf_vf2pf_isr_pend {
> +       struct {
> +               u32 isr_pend : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
> +#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
> +       (0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
> +union dlb2_func_pf_pf2vf_mailbox {
> +       struct {
> +               u32 msg : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
> +       (0x2f00 + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
> +union dlb2_func_pf_pf2vf_mailbox_isr {
> +       struct {
> +               u32 vf0_isr : 1;
> +               u32 vf1_isr : 1;
> +               u32 vf2_isr : 1;
> +               u32 vf3_isr : 1;
> +               u32 vf4_isr : 1;
> +               u32 vf5_isr : 1;
> +               u32 vf6_isr : 1;
> +               u32 vf7_isr : 1;
> +               u32 vf8_isr : 1;
> +               u32 vf9_isr : 1;
> +               u32 vf10_isr : 1;
> +               u32 vf11_isr : 1;
> +               u32 vf12_isr : 1;
> +               u32 vf13_isr : 1;
> +               u32 vf14_isr : 1;
> +               u32 vf15_isr : 1;
> +               u32 rsvd0 : 16;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
> +       (0x3000 + (vf_id) * 0x10000)
> +#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
> +union dlb2_func_pf_vf_reset_in_progress {
> +       struct {
> +               u32 vf0_reset_in_progress : 1;
> +               u32 vf1_reset_in_progress : 1;
> +               u32 vf2_reset_in_progress : 1;
> +               u32 vf3_reset_in_progress : 1;
> +               u32 vf4_reset_in_progress : 1;
> +               u32 vf5_reset_in_progress : 1;
> +               u32 vf6_reset_in_progress : 1;
> +               u32 vf7_reset_in_progress : 1;
> +               u32 vf8_reset_in_progress : 1;
> +               u32 vf9_reset_in_progress : 1;
> +               u32 vf10_reset_in_progress : 1;
> +               u32 vf11_reset_in_progress : 1;
> +               u32 vf12_reset_in_progress : 1;
> +               u32 vf13_reset_in_progress : 1;
> +               u32 vf14_reset_in_progress : 1;
> +               u32 vf15_reset_in_progress : 1;
> +               u32 rsvd0 : 16;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
> +       (0x100000c + (x) * 0x10)
> +#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
> +union dlb2_msix_mem_vector_ctrl {
> +       struct {
> +               u32 vec_mask : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
> +       (0x20 + (x) * 0x4)
> +#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
> +union dlb2_iosf_func_vf_bar_dsbl {
> +       struct {
> +               u32 func_vf_bar_dis : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_VAS 0x1000011c
> +#define DLB2_SYS_TOTAL_VAS_RST 0x20
> +union dlb2_sys_total_vas {
> +       struct {
> +               u32 total_vas : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
> +#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
> +union dlb2_sys_total_dir_ports {
> +       struct {
> +               u32 total_dir_ports : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
> +#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
> +union dlb2_sys_total_ldb_ports {
> +       struct {
> +               u32 total_ldb_ports : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
> +#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
> +union dlb2_sys_total_dir_qid {
> +       struct {
> +               u32 total_dir_qid : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
> +#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
> +union dlb2_sys_total_ldb_qid {
> +       struct {
> +               u32 total_ldb_qid : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
> +#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
> +union dlb2_sys_total_dir_crds {
> +       struct {
> +               u32 total_dir_credits : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
> +#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
> +union dlb2_sys_total_ldb_crds {
> +       struct {
> +               u32 total_ldb_credits : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
> +#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
> +union dlb2_sys_alarm_pf_synd2 {
> +       struct {
> +               u32 lock_id : 16;
> +               u32 meas : 1;
> +               u32 debug : 7;
> +               u32 cq_pop : 1;
> +               u32 qe_uhl : 1;
> +               u32 qe_orsp : 1;
> +               u32 qe_valid : 1;
> +               u32 cq_int_rearm : 1;
> +               u32 dsi_error : 1;
> +               u32 rsvd0 : 2;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
> +#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
> +union dlb2_sys_alarm_pf_synd1 {
> +       struct {
> +               u32 dsi : 16;
> +               u32 qid : 8;
> +               u32 qtype : 2;
> +               u32 qpri : 3;
> +               u32 msg_type : 3;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
> +#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
> +union dlb2_sys_alarm_pf_synd0 {
> +       struct {
> +               u32 syndrome : 8;
> +               u32 rtype : 2;
> +               u32 rsvd0 : 3;
> +               u32 is_ldb : 1;
> +               u32 cls : 2;
> +               u32 aid : 6;
> +               u32 unit : 4;
> +               u32 source : 4;
> +               u32 more : 1;
> +               u32 valid : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_LDB_VPP_V(x) \
> +       (0x10000f00 + (x) * 0x1000)
> +#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
> +union dlb2_sys_vf_ldb_vpp_v {
> +       struct {
> +               u32 vpp_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_LDB_VPP2PP(x) \
> +       (0x10000f04 + (x) * 0x1000)
> +#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
> +union dlb2_sys_vf_ldb_vpp2pp {
> +       struct {
> +               u32 pp : 6;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_DIR_VPP_V(x) \
> +       (0x10000f08 + (x) * 0x1000)
> +#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
> +union dlb2_sys_vf_dir_vpp_v {
> +       struct {
> +               u32 vpp_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_DIR_VPP2PP(x) \
> +       (0x10000f0c + (x) * 0x1000)
> +#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
> +union dlb2_sys_vf_dir_vpp2pp {
> +       struct {
> +               u32 pp : 6;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_LDB_VQID_V(x) \
> +       (0x10000f10 + (x) * 0x1000)
> +#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
> +union dlb2_sys_vf_ldb_vqid_v {
> +       struct {
> +               u32 vqid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_LDB_VQID2QID(x) \
> +       (0x10000f14 + (x) * 0x1000)
> +#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
> +union dlb2_sys_vf_ldb_vqid2qid {
> +       struct {
> +               u32 qid : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_QID2VQID(x) \
> +       (0x10000f18 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_QID2VQID_RST 0x0
> +union dlb2_sys_ldb_qid2vqid {
> +       struct {
> +               u32 vqid : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_DIR_VQID_V(x) \
> +       (0x10000f1c + (x) * 0x1000)
> +#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
> +union dlb2_sys_vf_dir_vqid_v {
> +       struct {
> +               u32 vqid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_VF_DIR_VQID2QID(x) \
> +       (0x10000f20 + (x) * 0x1000)
> +#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
> +union dlb2_sys_vf_dir_vqid2qid {
> +       struct {
> +               u32 qid : 6;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_VASQID_V(x) \
> +       (0x10000f24 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_VASQID_V_RST 0x0
> +union dlb2_sys_ldb_vasqid_v {
> +       struct {
> +               u32 vasqid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_VASQID_V(x) \
> +       (0x10000f28 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_VASQID_V_RST 0x0
> +union dlb2_sys_dir_vasqid_v {
> +       struct {
> +               u32 vasqid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_VF_SYND2(x) \
> +       (0x10000f48 + (x) * 0x1000)
> +#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
> +union dlb2_sys_alarm_vf_synd2 {
> +       struct {
> +               u32 lock_id : 16;
> +               u32 debug : 8;
> +               u32 cq_pop : 1;
> +               u32 qe_uhl : 1;
> +               u32 qe_orsp : 1;
> +               u32 qe_valid : 1;
> +               u32 isz : 1;
> +               u32 dsi_error : 1;
> +               u32 dlbrsvd : 2;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_VF_SYND1(x) \
> +       (0x10000f44 + (x) * 0x1000)
> +#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
> +union dlb2_sys_alarm_vf_synd1 {
> +       struct {
> +               u32 dsi : 16;
> +               u32 qid : 8;
> +               u32 qtype : 2;
> +               u32 qpri : 3;
> +               u32 msg_type : 3;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_VF_SYND0(x) \
> +       (0x10000f40 + (x) * 0x1000)
> +#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
> +union dlb2_sys_alarm_vf_synd0 {
> +       struct {
> +               u32 syndrome : 8;
> +               u32 rtype : 2;
> +               u32 vf_synd0_parity : 1;
> +               u32 vf_synd1_parity : 1;
> +               u32 vf_synd2_parity : 1;
> +               u32 is_ldb : 1;
> +               u32 cls : 2;
> +               u32 aid : 6;
> +               u32 unit : 4;
> +               u32 source : 4;
> +               u32 more : 1;
> +               u32 valid : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_QID_CFG_V(x) \
> +       (0x10000f58 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
> +union dlb2_sys_ldb_qid_cfg_v {
> +       struct {
> +               u32 sn_cfg_v : 1;
> +               u32 fid_cfg_v : 1;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_QID_ITS(x) \
> +       (0x10000f54 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_QID_ITS_RST 0x0
> +union dlb2_sys_ldb_qid_its {
> +       struct {
> +               u32 qid_its : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_QID_V(x) \
> +       (0x10000f50 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_QID_V_RST 0x0
> +union dlb2_sys_ldb_qid_v {
> +       struct {
> +               u32 qid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_QID_ITS(x) \
> +       (0x10000f64 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_QID_ITS_RST 0x0
> +union dlb2_sys_dir_qid_its {
> +       struct {
> +               u32 qid_its : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_QID_V(x) \
> +       (0x10000f60 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_QID_V_RST 0x0
> +union dlb2_sys_dir_qid_v {
> +       struct {
> +               u32 qid_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
> +       (0x10000fa8 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
> +union dlb2_sys_ldb_cq_ai_data {
> +       struct {
> +               u32 cq_ai_data : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
> +       (0x10000fa4 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
> +union dlb2_sys_ldb_cq_ai_addr {
> +       struct {
> +               u32 rsvd1 : 2;
> +               u32 cq_ai_addr : 18;
> +               u32 rsvd0 : 12;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_PASID(x) \
> +       (0x10000fa0 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
> +union dlb2_sys_ldb_cq_pasid {
> +       struct {
> +               u32 pasid : 20;
> +               u32 exe_req : 1;
> +               u32 priv_req : 1;
> +               u32 fmt2 : 1;
> +               u32 rsvd0 : 9;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_AT(x) \
> +       (0x10000f9c + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_AT_RST 0x0
> +union dlb2_sys_ldb_cq_at {
> +       struct {
> +               u32 cq_at : 2;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_ISR(x) \
> +       (0x10000f98 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
> +/* CQ Interrupt Modes */
> +#define DLB2_CQ_ISR_MODE_DIS  0
> +#define DLB2_CQ_ISR_MODE_MSI  1
> +#define DLB2_CQ_ISR_MODE_MSIX 2
> +#define DLB2_CQ_ISR_MODE_ADI  3
> +union dlb2_sys_ldb_cq_isr {
> +       struct {
> +               u32 vector : 6;
> +               u32 vf : 4;
> +               u32 en_code : 2;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
> +       (0x10000f94 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
> +union dlb2_sys_ldb_cq2vf_pf_ro {
> +       struct {
> +               u32 vf : 4;
> +               u32 is_pf : 1;
> +               u32 ro : 1;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_PP_V(x) \
> +       (0x10000f90 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_PP_V_RST 0x0
> +union dlb2_sys_ldb_pp_v {
> +       struct {
> +               u32 pp_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_PP2VDEV(x) \
> +       (0x10000f8c + (x) * 0x1000)
> +#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
> +union dlb2_sys_ldb_pp2vdev {
> +       struct {
> +               u32 vdev : 4;
> +               u32 rsvd0 : 28;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_PP2VAS(x) \
> +       (0x10000f88 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_PP2VAS_RST 0x0
> +union dlb2_sys_ldb_pp2vas {
> +       struct {
> +               u32 vas : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
> +       (0x10000f84 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
> +union dlb2_sys_ldb_cq_addr_u {
> +       struct {
> +               u32 addr_u : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
> +       (0x10000f80 + (x) * 0x1000)
> +#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
> +union dlb2_sys_ldb_cq_addr_l {
> +       struct {
> +               u32 rsvd0 : 6;
> +               u32 addr_l : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_FMT(x) \
> +       (0x10000fec + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
> +union dlb2_sys_dir_cq_fmt {
> +       struct {
> +               u32 keep_pf_ppid : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
> +       (0x10000fe8 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
> +union dlb2_sys_dir_cq_ai_data {
> +       struct {
> +               u32 cq_ai_data : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
> +       (0x10000fe4 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
> +union dlb2_sys_dir_cq_ai_addr {
> +       struct {
> +               u32 rsvd1 : 2;
> +               u32 cq_ai_addr : 18;
> +               u32 rsvd0 : 12;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_PASID(x) \
> +       (0x10000fe0 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
> +union dlb2_sys_dir_cq_pasid {
> +       struct {
> +               u32 pasid : 20;
> +               u32 exe_req : 1;
> +               u32 priv_req : 1;
> +               u32 fmt2 : 1;
> +               u32 rsvd0 : 9;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_AT(x) \
> +       (0x10000fdc + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_AT_RST 0x0
> +union dlb2_sys_dir_cq_at {
> +       struct {
> +               u32 cq_at : 2;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_ISR(x) \
> +       (0x10000fd8 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
> +union dlb2_sys_dir_cq_isr {
> +       struct {
> +               u32 vector : 6;
> +               u32 vf : 4;
> +               u32 en_code : 2;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
> +       (0x10000fd4 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
> +union dlb2_sys_dir_cq2vf_pf_ro {
> +       struct {
> +               u32 vf : 4;
> +               u32 is_pf : 1;
> +               u32 ro : 1;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_PP_V(x) \
> +       (0x10000fd0 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_PP_V_RST 0x0
> +union dlb2_sys_dir_pp_v {
> +       struct {
> +               u32 pp_v : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_PP2VDEV(x) \
> +       (0x10000fcc + (x) * 0x1000)
> +#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
> +union dlb2_sys_dir_pp2vdev {
> +       struct {
> +               u32 vdev : 4;
> +               u32 rsvd0 : 28;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_PP2VAS(x) \
> +       (0x10000fc8 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_PP2VAS_RST 0x0
> +union dlb2_sys_dir_pp2vas {
> +       struct {
> +               u32 vas : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
> +       (0x10000fc4 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
> +union dlb2_sys_dir_cq_addr_u {
> +       struct {
> +               u32 addr_u : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
> +       (0x10000fc0 + (x) * 0x1000)
> +#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
> +union dlb2_sys_dir_cq_addr_l {
> +       struct {
> +               u32 rsvd0 : 6;
> +               u32 addr_l : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
> +#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
> +union dlb2_sys_ingress_alarm_enbl {
> +       struct {
> +               u32 illegal_hcw : 1;
> +               u32 illegal_pp : 1;
> +               u32 illegal_pasid : 1;
> +               u32 illegal_qid : 1;
> +               u32 disabled_qid : 1;
> +               u32 illegal_ldb_qid_cfg : 1;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_MSIX_ACK 0x10000400
> +#define DLB2_SYS_MSIX_ACK_RST 0x0
> +union dlb2_sys_msix_ack {
> +       struct {
> +               u32 msix_0_ack : 1;
> +               u32 msix_1_ack : 1;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
> +#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
> +union dlb2_sys_msix_passthru {
> +       struct {
> +               u32 msix_0_passthru : 1;
> +               u32 msix_1_passthru : 1;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_MSIX_MODE 0x10000408
> +#define DLB2_SYS_MSIX_MODE_RST 0x0
> +/* MSI-X Modes */
> +#define DLB2_MSIX_MODE_PACKED     0
> +#define DLB2_MSIX_MODE_COMPRESSED 1
> +union dlb2_sys_msix_mode {
> +       struct {
> +               u32 mode : 1;
> +               u32 poll_mode : 1;
> +               u32 poll_mask : 1;
> +               u32 poll_lock : 1;
> +               u32 rsvd0 : 28;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
> +#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
> +union dlb2_sys_dir_cq_31_0_occ_int_sts {
> +       struct {
> +               u32 cq_0_occ_int : 1;
> +               u32 cq_1_occ_int : 1;
> +               u32 cq_2_occ_int : 1;
> +               u32 cq_3_occ_int : 1;
> +               u32 cq_4_occ_int : 1;
> +               u32 cq_5_occ_int : 1;
> +               u32 cq_6_occ_int : 1;
> +               u32 cq_7_occ_int : 1;
> +               u32 cq_8_occ_int : 1;
> +               u32 cq_9_occ_int : 1;
> +               u32 cq_10_occ_int : 1;
> +               u32 cq_11_occ_int : 1;
> +               u32 cq_12_occ_int : 1;
> +               u32 cq_13_occ_int : 1;
> +               u32 cq_14_occ_int : 1;
> +               u32 cq_15_occ_int : 1;
> +               u32 cq_16_occ_int : 1;
> +               u32 cq_17_occ_int : 1;
> +               u32 cq_18_occ_int : 1;
> +               u32 cq_19_occ_int : 1;
> +               u32 cq_20_occ_int : 1;
> +               u32 cq_21_occ_int : 1;
> +               u32 cq_22_occ_int : 1;
> +               u32 cq_23_occ_int : 1;
> +               u32 cq_24_occ_int : 1;
> +               u32 cq_25_occ_int : 1;
> +               u32 cq_26_occ_int : 1;
> +               u32 cq_27_occ_int : 1;
> +               u32 cq_28_occ_int : 1;
> +               u32 cq_29_occ_int : 1;
> +               u32 cq_30_occ_int : 1;
> +               u32 cq_31_occ_int : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
> +#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
> +union dlb2_sys_dir_cq_63_32_occ_int_sts {
> +       struct {
> +               u32 cq_32_occ_int : 1;
> +               u32 cq_33_occ_int : 1;
> +               u32 cq_34_occ_int : 1;
> +               u32 cq_35_occ_int : 1;
> +               u32 cq_36_occ_int : 1;
> +               u32 cq_37_occ_int : 1;
> +               u32 cq_38_occ_int : 1;
> +               u32 cq_39_occ_int : 1;
> +               u32 cq_40_occ_int : 1;
> +               u32 cq_41_occ_int : 1;
> +               u32 cq_42_occ_int : 1;
> +               u32 cq_43_occ_int : 1;
> +               u32 cq_44_occ_int : 1;
> +               u32 cq_45_occ_int : 1;
> +               u32 cq_46_occ_int : 1;
> +               u32 cq_47_occ_int : 1;
> +               u32 cq_48_occ_int : 1;
> +               u32 cq_49_occ_int : 1;
> +               u32 cq_50_occ_int : 1;
> +               u32 cq_51_occ_int : 1;
> +               u32 cq_52_occ_int : 1;
> +               u32 cq_53_occ_int : 1;
> +               u32 cq_54_occ_int : 1;
> +               u32 cq_55_occ_int : 1;
> +               u32 cq_56_occ_int : 1;
> +               u32 cq_57_occ_int : 1;
> +               u32 cq_58_occ_int : 1;
> +               u32 cq_59_occ_int : 1;
> +               u32 cq_60_occ_int : 1;
> +               u32 cq_61_occ_int : 1;
> +               u32 cq_62_occ_int : 1;
> +               u32 cq_63_occ_int : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
> +#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
> +union dlb2_sys_ldb_cq_31_0_occ_int_sts {
> +       struct {
> +               u32 cq_0_occ_int : 1;
> +               u32 cq_1_occ_int : 1;
> +               u32 cq_2_occ_int : 1;
> +               u32 cq_3_occ_int : 1;
> +               u32 cq_4_occ_int : 1;
> +               u32 cq_5_occ_int : 1;
> +               u32 cq_6_occ_int : 1;
> +               u32 cq_7_occ_int : 1;
> +               u32 cq_8_occ_int : 1;
> +               u32 cq_9_occ_int : 1;
> +               u32 cq_10_occ_int : 1;
> +               u32 cq_11_occ_int : 1;
> +               u32 cq_12_occ_int : 1;
> +               u32 cq_13_occ_int : 1;
> +               u32 cq_14_occ_int : 1;
> +               u32 cq_15_occ_int : 1;
> +               u32 cq_16_occ_int : 1;
> +               u32 cq_17_occ_int : 1;
> +               u32 cq_18_occ_int : 1;
> +               u32 cq_19_occ_int : 1;
> +               u32 cq_20_occ_int : 1;
> +               u32 cq_21_occ_int : 1;
> +               u32 cq_22_occ_int : 1;
> +               u32 cq_23_occ_int : 1;
> +               u32 cq_24_occ_int : 1;
> +               u32 cq_25_occ_int : 1;
> +               u32 cq_26_occ_int : 1;
> +               u32 cq_27_occ_int : 1;
> +               u32 cq_28_occ_int : 1;
> +               u32 cq_29_occ_int : 1;
> +               u32 cq_30_occ_int : 1;
> +               u32 cq_31_occ_int : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
> +#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
> +union dlb2_sys_ldb_cq_63_32_occ_int_sts {
> +       struct {
> +               u32 cq_32_occ_int : 1;
> +               u32 cq_33_occ_int : 1;
> +               u32 cq_34_occ_int : 1;
> +               u32 cq_35_occ_int : 1;
> +               u32 cq_36_occ_int : 1;
> +               u32 cq_37_occ_int : 1;
> +               u32 cq_38_occ_int : 1;
> +               u32 cq_39_occ_int : 1;
> +               u32 cq_40_occ_int : 1;
> +               u32 cq_41_occ_int : 1;
> +               u32 cq_42_occ_int : 1;
> +               u32 cq_43_occ_int : 1;
> +               u32 cq_44_occ_int : 1;
> +               u32 cq_45_occ_int : 1;
> +               u32 cq_46_occ_int : 1;
> +               u32 cq_47_occ_int : 1;
> +               u32 cq_48_occ_int : 1;
> +               u32 cq_49_occ_int : 1;
> +               u32 cq_50_occ_int : 1;
> +               u32 cq_51_occ_int : 1;
> +               u32 cq_52_occ_int : 1;
> +               u32 cq_53_occ_int : 1;
> +               u32 cq_54_occ_int : 1;
> +               u32 cq_55_occ_int : 1;
> +               u32 cq_56_occ_int : 1;
> +               u32 cq_57_occ_int : 1;
> +               u32 cq_58_occ_int : 1;
> +               u32 cq_59_occ_int : 1;
> +               u32 cq_60_occ_int : 1;
> +               u32 cq_61_occ_int : 1;
> +               u32 cq_62_occ_int : 1;
> +               u32 cq_63_occ_int : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
> +#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
> +union dlb2_sys_dir_cq_opt_clr {
> +       struct {
> +               u32 cq : 6;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
> +#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
> +union dlb2_sys_alarm_hw_synd {
> +       struct {
> +               u32 syndrome : 8;
> +               u32 rtype : 2;
> +               u32 alarm : 1;
> +               u32 cwd : 1;
> +               u32 vf_pf_mb : 1;
> +               u32 rsvd0 : 1;
> +               u32 cls : 2;
> +               u32 aid : 6;
> +               u32 unit : 4;
> +               u32 source : 4;
> +               u32 more : 1;
> +               u32 valid : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
> +       (0x20000000 + (x) * 0x1000)
> +#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
> +union dlb2_aqed_pipe_qid_fid_lim {
> +       struct {
> +               u32 qid_fid_limit : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
> +       (0x20080000 + (x) * 0x1000)
> +#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
> +union dlb2_aqed_pipe_qid_hid_width {
> +       struct {
> +               u32 compress_code : 3;
> +               u32 rsvd0 : 29;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
> +#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
> +union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_ATM_QID2CQIDIX_00(x) \
> +       (0x30080000 + (x) * 0x1000)
> +#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
> +#define DLB2_ATM_QID2CQIDIX(x, y) \
> +       (DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
> +#define DLB2_ATM_QID2CQIDIX_NUM 16
> +union dlb2_atm_qid2cqidix_00 {
> +       struct {
> +               u32 cq_p0 : 8;
> +               u32 cq_p1 : 8;
> +               u32 cq_p2 : 8;
> +               u32 cq_p3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
> +#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
> +union dlb2_atm_cfg_arb_weights_rdy_bin {
> +       struct {
> +               u32 bin0 : 8;
> +               u32 bin1 : 8;
> +               u32 bin2 : 8;
> +               u32 bin3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
> +#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
> +union dlb2_atm_cfg_arb_weights_sched_bin {
> +       struct {
> +               u32 bin0 : 8;
> +               u32 bin1 : 8;
> +               u32 bin2 : 8;
> +               u32 bin3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
> +       (0x40000000 + (x) * 0x1000)
> +#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
> +union dlb2_chp_cfg_dir_vas_crd {
> +       struct {
> +               u32 count : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
> +       (0x40080000 + (x) * 0x1000)
> +#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
> +union dlb2_chp_cfg_ldb_vas_crd {
> +       struct {
> +               u32 count : 15;
> +               u32 rsvd0 : 17;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_ORD_QID_SN(x) \
> +       (0x40100000 + (x) * 0x1000)
> +#define DLB2_CHP_ORD_QID_SN_RST 0x0
> +union dlb2_chp_ord_qid_sn {
> +       struct {
> +               u32 sn : 10;
> +               u32 rsvd0 : 22;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_ORD_QID_SN_MAP(x) \
> +       (0x40180000 + (x) * 0x1000)
> +#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
> +union dlb2_chp_ord_qid_sn_map {
> +       struct {
> +               u32 mode : 3;
> +               u32 slot : 4;
> +               u32 rsvz0 : 1;
> +               u32 grp : 1;
> +               u32 rsvz1 : 1;
> +               u32 rsvd0 : 22;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_SN_CHK_ENBL(x) \
> +       (0x40200000 + (x) * 0x1000)
> +#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
> +union dlb2_chp_sn_chk_enbl {
> +       struct {
> +               u32 en : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_DEPTH(x) \
> +       (0x40280000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
> +union dlb2_chp_dir_cq_depth {
> +       struct {
> +               u32 depth : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
> +       (0x40300000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
> +union dlb2_chp_dir_cq_int_depth_thrsh {
> +       struct {
> +               u32 depth_threshold : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
> +       (0x40380000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
> +union dlb2_chp_dir_cq_int_enb {
> +       struct {
> +               u32 en_tim : 1;
> +               u32 en_depth : 1;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
> +       (0x40480000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
> +union dlb2_chp_dir_cq_tmr_thrsh {
> +       struct {
> +               u32 thrsh_0 : 1;
> +               u32 thrsh_13_1 : 13;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
> +       (0x40500000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
> +union dlb2_chp_dir_cq_tkn_depth_sel {
> +       struct {
> +               u32 token_depth_select : 4;
> +               u32 rsvd0 : 28;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
> +       (0x40580000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
> +union dlb2_chp_dir_cq_wd_enb {
> +       struct {
> +               u32 wd_enable : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_WPTR(x) \
> +       (0x40600000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
> +union dlb2_chp_dir_cq_wptr {
> +       struct {
> +               u32 write_pointer : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ2VAS(x) \
> +       (0x40680000 + (x) * 0x1000)
> +#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
> +union dlb2_chp_dir_cq2vas {
> +       struct {
> +               u32 cq2vas : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_HIST_LIST_BASE(x) \
> +       (0x40700000 + (x) * 0x1000)
> +#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
> +union dlb2_chp_hist_list_base {
> +       struct {
> +               u32 base : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_HIST_LIST_LIM(x) \
> +       (0x40780000 + (x) * 0x1000)
> +#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
> +union dlb2_chp_hist_list_lim {
> +       struct {
> +               u32 limit : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
> +       (0x40800000 + (x) * 0x1000)
> +#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
> +union dlb2_chp_hist_list_pop_ptr {
> +       struct {
> +               u32 pop_ptr : 13;
> +               u32 generation : 1;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
> +       (0x40880000 + (x) * 0x1000)
> +#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
> +union dlb2_chp_hist_list_push_ptr {
> +       struct {
> +               u32 push_ptr : 13;
> +               u32 generation : 1;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_DEPTH(x) \
> +       (0x40900000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
> +union dlb2_chp_ldb_cq_depth {
> +       struct {
> +               u32 depth : 11;
> +               u32 rsvd0 : 21;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
> +       (0x40980000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
> +union dlb2_chp_ldb_cq_int_depth_thrsh {
> +       struct {
> +               u32 depth_threshold : 11;
> +               u32 rsvd0 : 21;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
> +       (0x40a00000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
> +union dlb2_chp_ldb_cq_int_enb {
> +       struct {
> +               u32 en_tim : 1;
> +               u32 en_depth : 1;
> +               u32 rsvd0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
> +       (0x40b00000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
> +union dlb2_chp_ldb_cq_tmr_thrsh {
> +       struct {
> +               u32 thrsh_0 : 1;
> +               u32 thrsh_13_1 : 13;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
> +       (0x40b80000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
> +union dlb2_chp_ldb_cq_tkn_depth_sel {
> +       struct {
> +               u32 token_depth_select : 4;
> +               u32 rsvd0 : 28;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
> +       (0x40c00000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
> +union dlb2_chp_ldb_cq_wd_enb {
> +       struct {
> +               u32 wd_enable : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_WPTR(x) \
> +       (0x40c80000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
> +union dlb2_chp_ldb_cq_wptr {
> +       struct {
> +               u32 write_pointer : 11;
> +               u32 rsvd0 : 21;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ2VAS(x) \
> +       (0x40d00000 + (x) * 0x1000)
> +#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
> +union dlb2_chp_ldb_cq2vas {
> +       struct {
> +               u32 cq2vas : 5;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
> +#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
> +union dlb2_chp_cfg_chp_csr_ctrl {
> +       struct {
> +               u32 int_cor_alarm_dis : 1;
> +               u32 int_cor_synd_dis : 1;
> +               u32 int_uncr_alarm_dis : 1;
> +               u32 int_unc_synd_dis : 1;
> +               u32 int_inf0_alarm_dis : 1;
> +               u32 int_inf0_synd_dis : 1;
> +               u32 int_inf1_alarm_dis : 1;
> +               u32 int_inf1_synd_dis : 1;
> +               u32 int_inf2_alarm_dis : 1;
> +               u32 int_inf2_synd_dis : 1;
> +               u32 int_inf3_alarm_dis : 1;
> +               u32 int_inf3_synd_dis : 1;
> +               u32 int_inf4_alarm_dis : 1;
> +               u32 int_inf4_synd_dis : 1;
> +               u32 int_inf5_alarm_dis : 1;
> +               u32 int_inf5_synd_dis : 1;
> +               u32 dlb_cor_alarm_enable : 1;
> +               u32 cfg_64bytes_qe_ldb_cq_mode : 1;
> +               u32 cfg_64bytes_qe_dir_cq_mode : 1;
> +               u32 pad_write_ldb : 1;
> +               u32 pad_write_dir : 1;
> +               u32 pad_first_write_ldb : 1;
> +               u32 pad_first_write_dir : 1;
> +               u32 rsvz0 : 9;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
> +#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
> +union dlb2_chp_dir_cq_intr_armed0 {
> +       struct {
> +               u32 armed : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
> +#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
> +union dlb2_chp_dir_cq_intr_armed1 {
> +       struct {
> +               u32 armed : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
> +#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
> +union dlb2_chp_cfg_dir_cq_timer_ctl {
> +       struct {
> +               u32 sample_interval : 8;
> +               u32 enb : 1;
> +               u32 rsvz0 : 23;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
> +#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
> +union dlb2_chp_cfg_dir_wdto_0 {
> +       struct {
> +               u32 wdto : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
> +#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
> +union dlb2_chp_cfg_dir_wdto_1 {
> +       struct {
> +               u32 wdto : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
> +#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
> +union dlb2_chp_cfg_dir_wd_disable0 {
> +       struct {
> +               u32 wd_disable : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
> +#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
> +union dlb2_chp_cfg_dir_wd_disable1 {
> +       struct {
> +               u32 wd_disable : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
> +#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
> +union dlb2_chp_cfg_dir_wd_enb_interval {
> +       struct {
> +               u32 sample_interval : 28;
> +               u32 enb : 1;
> +               u32 rsvz0 : 3;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
> +#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
> +union dlb2_chp_cfg_dir_wd_threshold {
> +       struct {
> +               u32 wd_threshold : 8;
> +               u32 rsvz0 : 24;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
> +#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
> +union dlb2_chp_ldb_cq_intr_armed0 {
> +       struct {
> +               u32 armed : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
> +#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
> +union dlb2_chp_ldb_cq_intr_armed1 {
> +       struct {
> +               u32 armed : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
> +#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
> +union dlb2_chp_cfg_ldb_cq_timer_ctl {
> +       struct {
> +               u32 sample_interval : 8;
> +               u32 enb : 1;
> +               u32 rsvz0 : 23;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
> +#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
> +union dlb2_chp_cfg_ldb_wdto_0 {
> +       struct {
> +               u32 wdto : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
> +#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
> +union dlb2_chp_cfg_ldb_wdto_1 {
> +       struct {
> +               u32 wdto : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
> +#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
> +union dlb2_chp_cfg_ldb_wd_disable0 {
> +       struct {
> +               u32 wd_disable : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
> +#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
> +union dlb2_chp_cfg_ldb_wd_disable1 {
> +       struct {
> +               u32 wd_disable : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
> +#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
> +union dlb2_chp_cfg_ldb_wd_enb_interval {
> +       struct {
> +               u32 sample_interval : 28;
> +               u32 enb : 1;
> +               u32 rsvz0 : 3;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
> +#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
> +union dlb2_chp_cfg_ldb_wd_threshold {
> +       struct {
> +               u32 wd_threshold : 8;
> +               u32 rsvz0 : 24;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
> +#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
> +union dlb2_chp_ctrl_diag_02 {
> +       struct {
> +               u32 egress_credit_status_empty : 1;
> +               u32 egress_credit_status_afull : 1;
> +               u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
> +               u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
> +               u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
> +               u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
> +               u32 chp_lsp_tok_pipe_credit_status_empty : 1;
> +               u32 chp_lsp_tok_pipe_credit_status_afull : 1;
> +               u32 chp_rop_pipe_credit_status_empty : 1;
> +               u32 chp_rop_pipe_credit_status_afull : 1;
> +               u32 qed_to_cq_pipe_credit_status_empty : 1;
> +               u32 qed_to_cq_pipe_credit_status_afull : 1;
> +               u32 egress_lsp_token_credit_status_empty : 1;
> +               u32 egress_lsp_token_credit_status_afull : 1;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
> +union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
> +union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
> +union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
> +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
> +union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_DP_DIR_CSR_CTRL 0x54000010
> +#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
> +union dlb2_dp_dir_csr_ctrl {
> +       struct {
> +               u32 int_cor_alarm_dis : 1;
> +               u32 int_cor_synd_dis : 1;
> +               u32 int_uncr_alarm_dis : 1;
> +               u32 int_unc_synd_dis : 1;
> +               u32 int_inf0_alarm_dis : 1;
> +               u32 int_inf0_synd_dis : 1;
> +               u32 int_inf1_alarm_dis : 1;
> +               u32 int_inf1_synd_dis : 1;
> +               u32 int_inf2_alarm_dis : 1;
> +               u32 int_inf2_synd_dis : 1;
> +               u32 int_inf3_alarm_dis : 1;
> +               u32 int_inf3_synd_dis : 1;
> +               u32 int_inf4_alarm_dis : 1;
> +               u32 int_inf4_synd_dis : 1;
> +               u32 int_inf5_alarm_dis : 1;
> +               u32 int_inf5_synd_dis : 1;
> +               u32 rsvz0 : 16;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
> +       struct {
> +               u32 pri0 : 8;
> +               u32 pri1 : 8;
> +               u32 pri2 : 8;
> +               u32 pri3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
> +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
> +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
> +       (0x96000000 + (x) * 0x4)
> +#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
> +union dlb2_ro_pipe_grp_0_slt_shft {
> +       struct {
> +               u32 change : 10;
> +               u32 rsvd0 : 22;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
> +       (0x96010000 + (x) * 0x4)
> +#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
> +union dlb2_ro_pipe_grp_1_slt_shft {
> +       struct {
> +               u32 change : 10;
> +               u32 rsvd0 : 22;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
> +#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
> +union dlb2_ro_pipe_grp_sn_mode {
> +       struct {
> +               u32 sn_mode_0 : 3;
> +               u32 rszv0 : 5;
> +               u32 sn_mode_1 : 3;
> +               u32 rszv1 : 21;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
> +#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
> +union dlb2_ro_pipe_cfg_ctrl_general_0 {
> +       struct {
> +               u32 unit_single_step_mode : 1;
> +               u32 rr_en : 1;
> +               u32 rszv0 : 30;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ2PRIOV(x) \
> +       (0xa0000000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ2PRIOV_RST 0x0
> +union dlb2_lsp_cq2priov {
> +       struct {
> +               u32 prio : 24;
> +               u32 v : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ2QID0(x) \
> +       (0xa0080000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ2QID0_RST 0x0
> +union dlb2_lsp_cq2qid0 {
> +       struct {
> +               u32 qid_p0 : 7;
> +               u32 rsvd3 : 1;
> +               u32 qid_p1 : 7;
> +               u32 rsvd2 : 1;
> +               u32 qid_p2 : 7;
> +               u32 rsvd1 : 1;
> +               u32 qid_p3 : 7;
> +               u32 rsvd0 : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ2QID1(x) \
> +       (0xa0100000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ2QID1_RST 0x0
> +union dlb2_lsp_cq2qid1 {
> +       struct {
> +               u32 qid_p4 : 7;
> +               u32 rsvd3 : 1;
> +               u32 qid_p5 : 7;
> +               u32 rsvd2 : 1;
> +               u32 qid_p6 : 7;
> +               u32 rsvd1 : 1;
> +               u32 qid_p7 : 7;
> +               u32 rsvd0 : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_DIR_DSBL(x) \
> +       (0xa0180000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
> +union dlb2_lsp_cq_dir_dsbl {
> +       struct {
> +               u32 disabled : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
> +       (0xa0200000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
> +union dlb2_lsp_cq_dir_tkn_cnt {
> +       struct {
> +               u32 count : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
> +       (0xa0280000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
> +union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
> +       struct {
> +               u32 token_depth_select : 4;
> +               u32 disable_wb_opt : 1;
> +               u32 ignore_depth : 1;
> +               u32 rsvd0 : 26;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
> +       (0xa0300000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
> +union dlb2_lsp_cq_dir_tot_sch_cntl {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
> +       (0xa0380000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
> +union dlb2_lsp_cq_dir_tot_sch_cnth {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_DSBL(x) \
> +       (0xa0400000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
> +union dlb2_lsp_cq_ldb_dsbl {
> +       struct {
> +               u32 disabled : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
> +       (0xa0480000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
> +union dlb2_lsp_cq_ldb_infl_cnt {
> +       struct {
> +               u32 count : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
> +       (0xa0500000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
> +union dlb2_lsp_cq_ldb_infl_lim {
> +       struct {
> +               u32 limit : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
> +       (0xa0580000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
> +union dlb2_lsp_cq_ldb_tkn_cnt {
> +       struct {
> +               u32 token_count : 11;
> +               u32 rsvd0 : 21;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
> +       (0xa0600000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
> +union dlb2_lsp_cq_ldb_tkn_depth_sel {
> +       struct {
> +               u32 token_depth_select : 4;
> +               u32 ignore_depth : 1;
> +               u32 rsvd0 : 27;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
> +       (0xa0680000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
> +union dlb2_lsp_cq_ldb_tot_sch_cntl {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
> +       (0xa0700000 + (x) * 0x1000)
> +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
> +union dlb2_lsp_cq_ldb_tot_sch_cnth {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
> +       (0xa0780000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
> +union dlb2_lsp_qid_dir_max_depth {
> +       struct {
> +               u32 depth : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
> +       (0xa0800000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
> +union dlb2_lsp_qid_dir_tot_enq_cntl {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
> +       (0xa0880000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
> +union dlb2_lsp_qid_dir_tot_enq_cnth {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
> +       (0xa0900000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
> +union dlb2_lsp_qid_dir_enqueue_cnt {
> +       struct {
> +               u32 count : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
> +       (0xa0980000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
> +union dlb2_lsp_qid_dir_depth_thrsh {
> +       struct {
> +               u32 thresh : 13;
> +               u32 rsvd0 : 19;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
> +       (0xa0a00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
> +union dlb2_lsp_qid_aqed_active_cnt {
> +       struct {
> +               u32 count : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
> +       (0xa0a80000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
> +union dlb2_lsp_qid_aqed_active_lim {
> +       struct {
> +               u32 limit : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
> +       (0xa0b00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
> +union dlb2_lsp_qid_atm_tot_enq_cntl {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
> +       (0xa0b80000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
> +union dlb2_lsp_qid_atm_tot_enq_cnth {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
> +       (0xa0c00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
> +union dlb2_lsp_qid_atq_enqueue_cnt {
> +       struct {
> +               u32 count : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
> +       (0xa0c80000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
> +union dlb2_lsp_qid_ldb_enqueue_cnt {
> +       struct {
> +               u32 count : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
> +       (0xa0d00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
> +union dlb2_lsp_qid_ldb_infl_cnt {
> +       struct {
> +               u32 count : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
> +       (0xa0d80000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
> +union dlb2_lsp_qid_ldb_infl_lim {
> +       struct {
> +               u32 limit : 12;
> +               u32 rsvd0 : 20;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID2CQIDIX_00(x) \
> +       (0xa0e00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
> +#define DLB2_LSP_QID2CQIDIX(x, y) \
> +       (DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
> +#define DLB2_LSP_QID2CQIDIX_NUM 16
> +union dlb2_lsp_qid2cqidix_00 {
> +       struct {
> +               u32 cq_p0 : 8;
> +               u32 cq_p1 : 8;
> +               u32 cq_p2 : 8;
> +               u32 cq_p3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID2CQIDIX2_00(x) \
> +       (0xa1600000 + (x) * 0x1000)
> +#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
> +#define DLB2_LSP_QID2CQIDIX2(x, y) \
> +       (DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
> +#define DLB2_LSP_QID2CQIDIX2_NUM 16
> +union dlb2_lsp_qid2cqidix2_00 {
> +       struct {
> +               u32 cq_p0 : 8;
> +               u32 cq_p1 : 8;
> +               u32 cq_p2 : 8;
> +               u32 cq_p3 : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
> +       (0xa1e00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
> +union dlb2_lsp_qid_ldb_replay_cnt {
> +       struct {
> +               u32 count : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
> +       (0xa1f00000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
> +union dlb2_lsp_qid_naldb_max_depth {
> +       struct {
> +               u32 depth : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
> +       (0xa1f80000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
> +union dlb2_lsp_qid_naldb_tot_enq_cntl {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
> +       (0xa2000000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
> +union dlb2_lsp_qid_naldb_tot_enq_cnth {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
> +       (0xa2080000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
> +union dlb2_lsp_qid_atm_depth_thrsh {
> +       struct {
> +               u32 thresh : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
> +       (0xa2100000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
> +union dlb2_lsp_qid_naldb_depth_thrsh {
> +       struct {
> +               u32 thresh : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_QID_ATM_ACTIVE(x) \
> +       (0xa2180000 + (x) * 0x1000)
> +#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
> +union dlb2_lsp_qid_atm_active {
> +       struct {
> +               u32 count : 14;
> +               u32 rsvd0 : 18;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
> +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
> +union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
> +       struct {
> +               u32 pri0_weight : 8;
> +               u32 pri1_weight : 8;
> +               u32 pri2_weight : 8;
> +               u32 pri3_weight : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
> +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
> +union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
> +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
> +union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
> +       struct {
> +               u32 pri0_weight : 8;
> +               u32 pri1_weight : 8;
> +               u32 pri2_weight : 8;
> +               u32 pri3_weight : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
> +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
> +union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
> +       struct {
> +               u32 rsvz0 : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
> +#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
> +union dlb2_lsp_ldb_sched_ctrl {
> +       struct {
> +               u32 cq : 8;
> +               u32 qidix : 3;
> +               u32 value : 1;
> +               u32 nalb_haswork_v : 1;
> +               u32 rlist_haswork_v : 1;
> +               u32 slist_haswork_v : 1;
> +               u32 inflight_ok_v : 1;
> +               u32 aqed_nfull_v : 1;
> +               u32 rsvz0 : 15;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
> +#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
> +union dlb2_lsp_dir_sch_cnt_l {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
> +#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
> +union dlb2_lsp_dir_sch_cnt_h {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
> +#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
> +union dlb2_lsp_ldb_sch_cnt_l {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
> +#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
> +union dlb2_lsp_ldb_sch_cnt_h {
> +       struct {
> +               u32 count : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
> +#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
> +union dlb2_lsp_cfg_shdw_ctrl {
> +       struct {
> +               u32 transfer : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
> +       (0xa4000074 + (x) * 4)
> +#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
> +union dlb2_lsp_cfg_shdw_range_cos {
> +       struct {
> +               u32 bw_range : 9;
> +               u32 rsvz0 : 22;
> +               u32 no_extra_credit : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
> +#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
> +union dlb2_lsp_cfg_ctrl_general_0 {
> +       struct {
> +               u32 disab_atq_empty_arb : 1;
> +               u32 inc_tok_unit_idle : 1;
> +               u32 disab_rlist_pri : 1;
> +               u32 inc_cmp_unit_idle : 1;
> +               u32 rsvz0 : 2;
> +               u32 dir_single_op : 1;
> +               u32 dir_half_bw : 1;
> +               u32 dir_single_out : 1;
> +               u32 dir_disab_multi : 1;
> +               u32 atq_single_op : 1;
> +               u32 atq_half_bw : 1;
> +               u32 atq_single_out : 1;
> +               u32 atq_disab_multi : 1;
> +               u32 dirrpl_single_op : 1;
> +               u32 dirrpl_half_bw : 1;
> +               u32 dirrpl_single_out : 1;
> +               u32 lbrpl_single_op : 1;
> +               u32 lbrpl_half_bw : 1;
> +               u32 lbrpl_single_out : 1;
> +               u32 ldb_single_op : 1;
> +               u32 ldb_half_bw : 1;
> +               u32 ldb_disab_multi : 1;
> +               u32 atm_single_sch : 1;
> +               u32 atm_single_cmp : 1;
> +               u32 ldb_ce_tog_arb : 1;
> +               u32 rsvz1 : 1;
> +               u32 smon0_valid_sel : 2;
> +               u32 smon0_value_sel : 1;
> +               u32 smon0_compare_sel : 2;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
> +#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
> +union dlb2_cfg_mstr_diag_reset_sts {
> +       struct {
> +               u32 chp_pf_reset_done : 1;
> +               u32 rop_pf_reset_done : 1;
> +               u32 lsp_pf_reset_done : 1;
> +               u32 nalb_pf_reset_done : 1;
> +               u32 ap_pf_reset_done : 1;
> +               u32 dp_pf_reset_done : 1;
> +               u32 qed_pf_reset_done : 1;
> +               u32 dqed_pf_reset_done : 1;
> +               u32 aqed_pf_reset_done : 1;
> +               u32 sys_pf_reset_done : 1;
> +               u32 pf_reset_active : 1;
> +               u32 flrsm_state : 7;
> +               u32 rsvd0 : 13;
> +               u32 dlb_proc_reset_done : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
> +#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
> +union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
> +       struct {
> +               u32 chp_pipeidle : 1;
> +               u32 rop_pipeidle : 1;
> +               u32 lsp_pipeidle : 1;
> +               u32 nalb_pipeidle : 1;
> +               u32 ap_pipeidle : 1;
> +               u32 dp_pipeidle : 1;
> +               u32 qed_pipeidle : 1;
> +               u32 dqed_pipeidle : 1;
> +               u32 aqed_pipeidle : 1;
> +               u32 sys_pipeidle : 1;
> +               u32 chp_unit_idle : 1;
> +               u32 rop_unit_idle : 1;
> +               u32 lsp_unit_idle : 1;
> +               u32 nalb_unit_idle : 1;
> +               u32 ap_unit_idle : 1;
> +               u32 dp_unit_idle : 1;
> +               u32 qed_unit_idle : 1;
> +               u32 dqed_unit_idle : 1;
> +               u32 aqed_unit_idle : 1;
> +               u32 sys_unit_idle : 1;
> +               u32 rsvd1 : 4;
> +               u32 mstr_cfg_ring_idle : 1;
> +               u32 mstr_cfg_mstr_idle : 1;
> +               u32 mstr_flr_clkreq_b : 1;
> +               u32 mstr_proc_idle : 1;
> +               u32 mstr_proc_idle_masked : 1;
> +               u32 rsvd0 : 2;
> +               u32 dlb_func_idle : 1;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
> +#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
> +union dlb2_cfg_mstr_cfg_pm_status {
> +       struct {
> +               u32 prochot : 1;
> +               u32 pgcb_dlb_idle : 1;
> +               u32 pgcb_dlb_pg_rdy_ack_b : 1;
> +               u32 pmsm_pgcb_req_b : 1;
> +               u32 pgbc_pmc_pg_req_b : 1;
> +               u32 pmc_pgcb_pg_ack_b : 1;
> +               u32 pmc_pgcb_fet_en_b : 1;
> +               u32 pgcb_fet_en_b : 1;
> +               u32 rsvz0 : 1;
> +               u32 rsvz1 : 1;
> +               u32 fuse_force_on : 1;
> +               u32 fuse_proc_disable : 1;
> +               u32 rsvz2 : 1;
> +               u32 rsvz3 : 1;
> +               u32 pm_fsm_d0tod3_ok : 1;
> +               u32 pm_fsm_d3tod0_ok : 1;
> +               u32 dlb_in_d3 : 1;
> +               u32 rsvz4 : 7;
> +               u32 pmsm : 8;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
> +#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
> +union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
> +       struct {
> +               u32 disable : 1;
> +               u32 rsvz0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
> +#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
> +       (0x1000 + (x) * 0x4)
> +#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
> +union dlb2_func_vf_vf2pf_mailbox {
> +       struct {
> +               u32 msg : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
> +#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
> +#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
> +union dlb2_func_vf_vf2pf_mailbox_isr {
> +       struct {
> +               u32 isr : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
> +#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
> +       (0x2000 + (x) * 0x4)
> +#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
> +union dlb2_func_vf_pf2vf_mailbox {
> +       struct {
> +               u32 msg : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
> +#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
> +union dlb2_func_vf_pf2vf_mailbox_isr {
> +       struct {
> +               u32 pf_isr : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
> +#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
> +union dlb2_func_vf_vf_msi_isr_pend {
> +       struct {
> +               u32 isr_pend : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
> +#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
> +union dlb2_func_vf_vf_reset_in_progress {
> +       struct {
> +               u32 reset_in_progress : 1;
> +               u32 rsvd0 : 31;
> +       } field;
> +       u32 val;
> +};
> +
> +#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
> +#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
> +union dlb2_func_vf_vf_msi_isr {
> +       struct {
> +               u32 vf_msi_isr : 32;
> +       } field;
> +       u32 val;
> +};
> +
> +#endif /* __DLB2_REGS_H */
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
> new file mode 100644
> index 0000000..6de8b95
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
> @@ -0,0 +1,274 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include "dlb2_user.h"
> +
> +#include "dlb2_hw_types.h"
> +#include "dlb2_mbox.h"
> +#include "dlb2_osdep.h"
> +#include "dlb2_osdep_bitmap.h"
> +#include "dlb2_osdep_types.h"
> +#include "dlb2_regs.h"
> +#include "dlb2_resource.h"
> +
> +static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
> +{
> +       int i;
> +
> +       dlb2_list_init_head(&domain->used_ldb_queues);
> +       dlb2_list_init_head(&domain->used_dir_pq_pairs);
> +       dlb2_list_init_head(&domain->avail_ldb_queues);
> +       dlb2_list_init_head(&domain->avail_dir_pq_pairs);
> +
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               dlb2_list_init_head(&domain->used_ldb_ports[i]);
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               dlb2_list_init_head(&domain->avail_ldb_ports[i]);
> +}
> +
> +static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
> +{
> +       int i;
> +
> +       dlb2_list_init_head(&rsrc->avail_domains);
> +       dlb2_list_init_head(&rsrc->used_domains);
> +       dlb2_list_init_head(&rsrc->avail_ldb_queues);
> +       dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
> +
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
> +}
> +
> +void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
> +{
> +       union dlb2_chp_cfg_chp_csr_ctrl r0;
> +
> +       r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
> +
> +       r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
> +
> +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
> +}
> +
> +int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
> +                             struct dlb2_get_num_resources_args *arg,
> +                             bool vdev_req,
> +                             unsigned int vdev_id)
> +{
> +       struct dlb2_function_resources *rsrcs;
> +       struct dlb2_bitmap *map;
> +       int i;
> +
> +       if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
> +               return -EINVAL;
> +
> +       if (vdev_req)
> +               rsrcs = &hw->vdev[vdev_id];
> +       else
> +               rsrcs = &hw->pf;
> +
> +       arg->num_sched_domains = rsrcs->num_avail_domains;
> +
> +       arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
> +
> +       arg->num_ldb_ports = 0;
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
> +
> +       arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
> +       arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
> +       arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
> +       arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
> +
> +       arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
> +
> +       arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
> +
> +       map = rsrcs->avail_hist_list_entries;
> +
> +       arg->num_hist_list_entries = dlb2_bitmap_count(map);
> +
> +       arg->max_contiguous_hist_list_entries =
> +               dlb2_bitmap_longest_set_range(map);
> +
> +       arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
> +
> +       arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
> +
> +       return 0;
> +}
> +
> +void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
> +{
> +       union dlb2_chp_cfg_chp_csr_ctrl r0;
> +
> +       r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
> +
> +       r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
> +
> +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
> +}
> +
> +void dlb2_resource_free(struct dlb2_hw *hw)
> +{
> +       int i;
> +
> +       if (hw->pf.avail_hist_list_entries)
> +               dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
> +
> +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> +               if (hw->vdev[i].avail_hist_list_entries)
> +                       dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
> +       }
> +}
> +
> +int dlb2_resource_init(struct dlb2_hw *hw)
> +{
> +       struct dlb2_list_entry *list;
> +       unsigned int i;
> +       int ret;
> +
> +       /*
> +        * For optimal load-balancing, ports that map to one or more QIDs in
> +        * common should not be in numerical sequence. This is application
> +        * dependent, but the driver interleaves port IDs as much as possible
> +        * to reduce the likelihood of this. This initial allocation maximizes
> +        * the average distance between an ID and its immediate neighbors (i.e.
> +        * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
> +        * 3, etc.).
> +        */
> +       u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
> +               0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
> +               16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
> +               32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
> +               48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
> +       };
> +
> +       /* Zero-out resource tracking data structures */
> +       memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
> +       memset(&hw->pf, 0, sizeof(hw->pf));
> +
> +       dlb2_init_fn_rsrc_lists(&hw->pf);
> +
> +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> +               memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
> +               dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
> +       }
> +
> +       for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
> +               memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
> +               dlb2_init_domain_rsrc_lists(&hw->domains[i]);
> +               hw->domains[i].parent_func = &hw->pf;
> +       }
> +
> +       /* Give all resources to the PF driver */
> +       hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
> +       for (i = 0; i < hw->pf.num_avail_domains; i++) {
> +               list = &hw->domains[i].func_list;
> +
> +               dlb2_list_add(&hw->pf.avail_domains, list);
> +       }
> +
> +       hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
> +       for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
> +               list = &hw->rsrcs.ldb_queues[i].func_list;
> +
> +               dlb2_list_add(&hw->pf.avail_ldb_queues, list);
> +       }
> +
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               hw->pf.num_avail_ldb_ports[i] =
> +                       DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
> +
> +       for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
> +               int cos_id = i >> DLB2_NUM_COS_DOMAINS;
> +               struct dlb2_ldb_port *port;
> +
> +               port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
> +
> +               dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
> +                             &port->func_list);
> +       }
> +
> +       hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
> +       for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
> +               list = &hw->rsrcs.dir_pq_pairs[i].func_list;
> +
> +               dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
> +       }
> +
> +       hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
> +       hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
> +       hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
> +
> +       ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
> +                               DLB2_MAX_NUM_HIST_LIST_ENTRIES);
> +       if (ret)
> +               goto unwind;
> +
> +       ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
> +       if (ret)
> +               goto unwind;
> +
> +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> +               ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
> +                                       DLB2_MAX_NUM_HIST_LIST_ENTRIES);
> +               if (ret)
> +                       goto unwind;
> +
> +               ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
> +               if (ret)
> +                       goto unwind;
> +       }
> +
> +       /* Initialize the hardware resource IDs */
> +       for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
> +               hw->domains[i].id.phys_id = i;
> +               hw->domains[i].id.vdev_owned = false;
> +       }
> +
> +       for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
> +               hw->rsrcs.ldb_queues[i].id.phys_id = i;
> +               hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
> +       }
> +
> +       for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
> +               hw->rsrcs.ldb_ports[i].id.phys_id = i;
> +               hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
> +       }
> +
> +       for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
> +               hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
> +               hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
> +       }
> +
> +       for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
> +               hw->rsrcs.sn_groups[i].id = i;
> +               /* Default mode (0) is 64 sequence numbers per queue */
> +               hw->rsrcs.sn_groups[i].mode = 0;
> +               hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
> +               hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
> +       }
> +
> +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> +               hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
> +
> +       return 0;
> +
> +unwind:
> +       dlb2_resource_free(hw);
> +
> +       return ret;
> +}
> +
> +void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
> +{
> +       union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
> +
> +       r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
> +
> +       r0.field.disable = 0;
> +
> +       DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
> +}
> diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
> new file mode 100644
> index 0000000..503fdf3
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
> @@ -0,0 +1,1913 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_RESOURCE_H
> +#define __DLB2_RESOURCE_H
> +
> +#include "dlb2_user.h"
> +
> +#include "dlb2_hw_types.h"
> +#include "dlb2_osdep_types.h"
> +
> +/**
> + * dlb2_resource_init() - initialize the device
> + * @hw: pointer to struct dlb2_hw.
> + *
> + * This function initializes the device's software state (pointed to by the hw
> + * argument) and programs global scheduling QoS registers. This function should
> + * be called during driver initialization.
> + *
> + * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
> + * device is reset.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + */
> +int dlb2_resource_init(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_resource_free() - free device state memory
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function frees software state pointed to by dlb2_hw. This function
> + * should be called when resetting the device or unloading the driver.
> + */
> +void dlb2_resource_free(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_resource_reset() - reset in-use resources to their initial state
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function resets in-use resources, and makes them available for use.
> + * All resources go back to their owning function, whether a PF or a VF.
> + */
> +void dlb2_resource_reset(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_create_sched_domain() - create a scheduling domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @args: scheduling domain creation arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function creates a scheduling domain containing the resources specified
> + * in args. The individual resources (queues, ports, credits) can be configured
> + * after creating a scheduling domain.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the domain ID.
> + *
> + * resp->id contains a virtual ID if vdev_request is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, or the requested domain name
> + *         is already in use.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
> +                               struct dlb2_create_sched_domain_args *args,
> +                               struct dlb2_cmd_response *resp,
> +                               bool vdev_request,
> +                               unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_create_ldb_queue() - create a load-balanced queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: queue creation arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function creates a load-balanced queue.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the queue ID.
> + *
> + * resp->id contains a virtual ID if vdev_request is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, the domain is not configured,
> + *         the domain has already been started, or the requested queue name is
> + *         already in use.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_create_ldb_queue_args *args,
> +                            struct dlb2_cmd_response *resp,
> +                            bool vdev_request,
> +                            unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_create_dir_queue() - create a directed queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: queue creation arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function creates a directed queue.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the queue ID.
> + *
> + * resp->id contains a virtual ID if vdev_request is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, the domain is not configured,
> + *         or the domain has already been started.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_create_dir_queue_args *args,
> +                            struct dlb2_cmd_response *resp,
> +                            bool vdev_request,
> +                            unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_create_dir_port() - create a directed port
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port creation arguments.
> + * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function creates a directed port.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the port ID.
> + *
> + * resp->id contains a virtual ID if vdev_request is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
> + *         pointer address is not properly aligned, the domain is not
> + *         configured, or the domain has already been started.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_create_dir_port_args *args,
> +                           uintptr_t cq_dma_base,
> +                           struct dlb2_cmd_response *resp,
> +                           bool vdev_request,
> +                           unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_create_ldb_port() - create a load-balanced port
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port creation arguments.
> + * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function creates a load-balanced port.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the port ID.
> + *
> + * resp->id contains a virtual ID if vdev_request is true.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
> + *         pointer address is not properly aligned, the domain is not
> + *         configured, or the domain has already been started.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_create_ldb_port_args *args,
> +                           uintptr_t cq_dma_base,
> +                           struct dlb2_cmd_response *resp,
> +                           bool vdev_request,
> +                           unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_start_domain() - start a scheduling domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: start domain arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function starts a scheduling domain, which allows applications to send
> + * traffic through it. Once a domain is started, its resources can no longer be
> + * configured (besides QID remapping and port enable/disable).
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - the domain is not configured, or the domain is already started.
> + */
> +int dlb2_hw_start_domain(struct dlb2_hw *hw,
> +                        u32 domain_id,
> +                        struct dlb2_start_domain_args *args,
> +                        struct dlb2_cmd_response *resp,
> +                        bool vdev_request,
> +                        unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: map QID arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to schedule QEs from the specified queue
> + * to the specified port. Each load-balanced port can be mapped to up to 8
> + * queues; each load-balanced queue can potentially map to all the
> + * load-balanced ports.
> + *
> + * A successful return does not necessarily mean the mapping was configured. If
> + * this function is unable to immediately map the queue to the port, it will
> + * add the requested operation to a per-port list of pending map/unmap
> + * operations, and (if it's not already running) launch a kernel thread that
> + * periodically attempts to process all pending operations. In a sense, this is
> + * an asynchronous function.
> + *
> + * This asynchronicity creates two views of the state of hardware: the actual
> + * hardware state and the requested state (as if every request completed
> + * immediately). If there are any pending map/unmap operations, the requested
> + * state will differ from the actual state. All validation is performed with
> + * respect to the pending state; for instance, if there are 8 pending map
> + * operations for port X, a request for a 9th will fail because a load-balanced
> + * port can only map up to 8 queues.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
> + *         the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_map_qid(struct dlb2_hw *hw,
> +                   u32 domain_id,
> +                   struct dlb2_map_qid_args *args,
> +                   struct dlb2_cmd_response *resp,
> +                   bool vdev_request,
> +                   unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: unmap QID arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to stop scheduling QEs from the specified
> + * queue to the specified port.
> + *
> + * A successful return does not necessarily mean the mapping was removed. If
> + * this function is unable to immediately unmap the queue from the port, it
> + * will add the requested operation to a per-port list of pending map/unmap
> + * operations, and (if it's not already running) launch a kernel thread that
> + * periodically attempts to process all pending operations. See
> + * dlb2_hw_map_qid() for more details.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
> + *         the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
> +                     u32 domain_id,
> +                     struct dlb2_unmap_qid_args *args,
> +                     struct dlb2_cmd_response *resp,
> +                     bool vdev_request,
> +                     unsigned int vdev_id);
> +
> +/**
> + * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function attempts to finish any outstanding unmap procedures. It
> + * should be called by the kernel thread responsible for finishing
> + * map/unmap procedures.
> + *
> + * Return:
> + * Returns the number of procedures that weren't completed.
> + */
> +unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_finish_map_qid_procedures() - finish any pending map procedures
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function attempts to finish any outstanding map procedures. It
> + * should be called by the kernel thread responsible for finishing
> + * map/unmap procedures.
> + *
> + * Return:
> + * Returns the number of procedures that weren't completed.
> + */
> +unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_enable_ldb_port() - enable a load-balanced port for scheduling
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port enable arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to schedule QEs to a load-balanced port.
> + * Ports are enabled by default.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid or the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_enable_ldb_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_enable_ldb_port_args *args,
> +                           struct dlb2_cmd_response *resp,
> +                           bool vdev_request,
> +                           unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_disable_ldb_port() - disable a load-balanced port for scheduling
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port disable arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to stop scheduling QEs to a load-balanced
> + * port. Ports are enabled by default.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid or the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_disable_ldb_port(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_disable_ldb_port_args *args,
> +                            struct dlb2_cmd_response *resp,
> +                            bool vdev_request,
> +                            unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_enable_dir_port() - enable a directed port for scheduling
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port enable arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to schedule QEs to a directed port.
> + * Ports are enabled by default.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid or the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_enable_dir_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_enable_dir_port_args *args,
> +                           struct dlb2_cmd_response *resp,
> +                           bool vdev_request,
> +                           unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_disable_dir_port() - disable a directed port for scheduling
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: port disable arguments.
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function configures the DLB to stop scheduling QEs to a directed port.
> + * Ports are enabled by default.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid or the domain is not configured.
> + * EFAULT - Internal error (resp->status not set).
> + */
> +int dlb2_hw_disable_dir_port(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_disable_dir_port_args *args,
> +                            struct dlb2_cmd_response *resp,
> +                            bool vdev_request,
> +                            unsigned int vdev_id);
> +
> +/**
> + * dlb2_configure_ldb_cq_interrupt() - configure load-balanced CQ for
> + *                                     interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: load-balanced port ID.
> + * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
> + *         else a value up to 64.
> + * @mode: interrupt type (DLB2_CQ_ISR_MODE_MSI or DLB2_CQ_ISR_MODE_MSIX)
> + * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
> + *     virtual port ID to a physical port ID. Ignored if mode is not MSI.
> + * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
> + * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
> + *     greater than 0.
> + *
> + * This function configures the DLB registers for a load-balanced CQ's
> + * interrupts. This doesn't enable the CQ's interrupt; that can be done with
> + * dlb2_arm_cq_interrupt() or through an interrupt arm QE.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid.
> + */
> +int dlb2_configure_ldb_cq_interrupt(struct dlb2_hw *hw,
> +                                   int port_id,
> +                                   int vector,
> +                                   int mode,
> +                                   unsigned int vf,
> +                                   unsigned int owner_vf,
> +                                   u16 threshold);
> +
> +/**
> + * dlb2_configure_dir_cq_interrupt() - configure directed CQ for interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: directed port ID.
> + * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
> + *         else a value up to 64.
> + * @mode: interrupt type (DLB2_CQ_ISR_MODE_MSI or DLB2_CQ_ISR_MODE_MSIX)
> + * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
> + *     virtual port ID to a physical port ID. Ignored if mode is not MSI.
> + * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
> + * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
> + *     greater than 0.
> + *
> + * This function configures the DLB registers for a directed CQ's interrupts.
> + * This doesn't enable the CQ's interrupt; that can be done with
> + * dlb2_arm_cq_interrupt() or through an interrupt arm QE.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - The port ID is invalid.
> + */
> +int dlb2_configure_dir_cq_interrupt(struct dlb2_hw *hw,
> +                                   int port_id,
> +                                   int vector,
> +                                   int mode,
> +                                   unsigned int vf,
> +                                   unsigned int owner_vf,
> +                                   u16 threshold);
> +
> +/**
> + * dlb2_enable_ingress_error_alarms() - enable ingress error alarm interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + */
> +void dlb2_enable_ingress_error_alarms(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_disable_ingress_error_alarms() - disable ingress error alarm interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + */
> +void dlb2_disable_ingress_error_alarms(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_set_msix_mode() - enable certain hardware alarm interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + * @mode: MSI-X mode (DLB2_MSIX_MODE_PACKED or DLB2_MSIX_MODE_COMPRESSED)
> + *
> + * This function configures the hardware to use either packed or compressed
> + * mode. This function should not be called if using MSI interrupts.
> + */
> +void dlb2_set_msix_mode(struct dlb2_hw *hw, int mode);
> +
> +/**
> + * dlb2_ack_msix_interrupt() - Ack an MSI-X interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + * @vector: interrupt vector.
> + *
> + * Note: Only needed for PF service interrupts (vector 0). CQ interrupts are
> + * acked in dlb2_ack_compressed_cq_intr().
> + */
> +void dlb2_ack_msix_interrupt(struct dlb2_hw *hw, int vector);
> +
> +/**
> + * dlb2_arm_cq_interrupt() - arm a CQ's interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: port ID
> + * @is_ldb: true for a load-balanced port, false for a directed port
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function arms the CQ's interrupt. The CQ must be configured prior to
> + * calling this function.
> + *
> + * The function does no parameter validation; that is the caller's
> + * responsibility.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return: returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - Invalid port ID.
> + */
> +int dlb2_arm_cq_interrupt(struct dlb2_hw *hw,
> +                         int port_id,
> +                         bool is_ldb,
> +                         bool vdev_request,
> +                         unsigned int vdev_id);
> +
> +/**
> + * dlb2_read_compressed_cq_intr_status() - read compressed CQ interrupt status
> + * @hw: dlb2_hw handle for a particular device.
> + * @ldb_interrupts: 2-entry array of u32 bitmaps
> + * @dir_interrupts: 4-entry array of u32 bitmaps
> + *
> + * This function can be called from a compressed CQ interrupt handler to
> + * determine which CQ interrupts have fired. The caller should take
> + * appropriate action (such as waking threads blocked on a CQ's interrupt),
> + * then ack the interrupts with dlb2_ack_compressed_cq_intr().
> + */
> +void dlb2_read_compressed_cq_intr_status(struct dlb2_hw *hw,
> +                                        u32 *ldb_interrupts,
> +                                        u32 *dir_interrupts);
> +
> +/**
> + * dlb2_ack_compressed_cq_intr() - ack compressed CQ interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + * @ldb_interrupts: 2-entry array of u32 bitmaps
> + * @dir_interrupts: 4-entry array of u32 bitmaps
> + *
> + * This function ACKs compressed CQ interrupts. Its arguments should be the
> + * same ones passed to dlb2_read_compressed_cq_intr_status().
> + */
> +void dlb2_ack_compressed_cq_intr(struct dlb2_hw *hw,
> +                                u32 *ldb_interrupts,
> +                                u32 *dir_interrupts);
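[Editor's note: the read/ack pair above operates on plain u32 bitmaps, so a handler typically scans each word for set bits before acking. A minimal, self-contained sketch of that scan follows; `collect_fired_cqs()` is a stand-in name, and the array sizes come from the comments (2-entry LDB, 4-entry DIR), not from the dlb2 headers.]

```c
#include <stdint.h>

/*
 * Illustrative only: walk the u32 bitmaps filled in by
 * dlb2_read_compressed_cq_intr_status() and collect the IDs of the CQs
 * whose interrupt fired. Not part of the dlb2 API.
 */
static int collect_fired_cqs(const uint32_t *bitmaps, int nwords,
			     int *cq_ids, int max_ids)
{
	int count = 0;

	for (int w = 0; w < nwords; w++) {
		uint32_t bits = bitmaps[w];

		while (bits != 0 && count < max_ids) {
			int bit = __builtin_ctz(bits); /* lowest set bit */

			cq_ids[count++] = w * 32 + bit;
			bits &= bits - 1; /* clear that bit */
		}
	}
	return count;
}
```

For a 2-entry LDB bitmap of {0x5, 0x80000000}, this reports CQ IDs 0, 2, and 63.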
> +
> +/**
> + * dlb2_read_vf_intr_status() - read the VF interrupt status register
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function can be called from a VF's interrupt handler to determine
> + * which interrupts have fired. The first 31 bits correspond to CQ interrupt
> + * vectors, and the final bit is for the PF->VF mailbox interrupt vector.
> + *
> + * Return:
> + * Returns a bit vector indicating which interrupt vectors are active.
> + */
> +u32 dlb2_read_vf_intr_status(struct dlb2_hw *hw);
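[Editor's note: per the comment, bits 0-30 of the returned vector are CQ interrupt vectors and bit 31 is the PF->VF mailbox vector. A sketch of decoding that layout; the helper names and macro are illustrative, not part of the dlb2 API.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit position taken from the comment above (final bit = mailbox). */
#define VF_MBOX_INTR_BIT 31

/* Illustrative: is the PF->VF mailbox interrupt pending? */
static bool vf_mbox_intr_pending(uint32_t status)
{
	return (status >> VF_MBOX_INTR_BIT) & 1u;
}

/* Illustrative: mask of just the CQ interrupt vectors (bits 0-30). */
static uint32_t vf_cq_intr_bits(uint32_t status)
{
	return status & ~(UINT32_C(1) << VF_MBOX_INTR_BIT);
}
```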
> +
> +/**
> + * dlb2_ack_vf_intr_status() - ack VF interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + * @interrupts: 32-bit bitmap
> + *
> + * This function ACKs a VF's interrupts. Its interrupts argument should be the
> + * value returned by dlb2_read_vf_intr_status().
> + */
> +void dlb2_ack_vf_intr_status(struct dlb2_hw *hw, u32 interrupts);
> +
> +/**
> + * dlb2_ack_vf_msi_intr() - ack VF MSI interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + * @interrupts: 32-bit bitmap
> + *
> + * This function clears the VF's MSI interrupt pending register. Its
> + * interrupts argument should contain the MSI vectors to ack. For example, if
> + * MSI MME is in mode 0, then only bit 0 should ever be set.
> + */
> +void dlb2_ack_vf_msi_intr(struct dlb2_hw *hw, u32 interrupts);
> +
> +/**
> + * dlb2_ack_pf_mbox_int() - ack PF->VF mailbox interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * When done processing the PF mailbox request, this function unsets
> + * the PF's mailbox ISR register.
> + */
> +void dlb2_ack_pf_mbox_int(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_read_vdev_to_pf_int_bitvec() - return a bit vector of all requesting
> + *                                     vdevs
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * When the vdev->PF ISR fires, this function can be called to determine which
> + * vdev(s) are requesting service. This bitvector must be passed to
> + * dlb2_ack_vdev_to_pf_int() when processing is complete for all requesting
> + * vdevs.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns a bit vector indicating which VFs (0-15) have requested service.
> + */
> +u32 dlb2_read_vdev_to_pf_int_bitvec(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_ack_vdev_mbox_int() - ack processed vdev->PF mailbox interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + * @bitvec: bit vector returned by dlb2_read_vdev_to_pf_int_bitvec()
> + *
> + * When done processing all VF mailbox requests, this function unsets the VF's
> + * mailbox ISR register.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_ack_vdev_mbox_int(struct dlb2_hw *hw, u32 bitvec);
> +
> +/**
> + * dlb2_read_vf_flr_int_bitvec() - return a bit vector of all VFs requesting
> + *                                 FLR
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * When the VF FLR ISR fires, this function can be called to determine which
> + * VF(s) are requesting FLRs. This bitvector must be passed to
> + * dlb2_ack_vf_flr_int() when processing is complete for all requesting VFs.
> + *
> + * Return:
> + * Returns a bit vector indicating which VFs (0-15) have requested FLRs.
> + */
> +u32 dlb2_read_vf_flr_int_bitvec(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_ack_vf_flr_int() - ack processed VF<->PF interrupt(s)
> + * @hw: dlb2_hw handle for a particular device.
> + * @bitvec: bit vector returned by dlb2_read_vf_flr_int_bitvec()
> + *
> + * When done processing all VF FLR requests, this function unsets the VF's FLR
> + * ISR register.
> + */
> +void dlb2_ack_vf_flr_int(struct dlb2_hw *hw, u32 bitvec);
> +
> +/**
> + * dlb2_ack_vdev_to_pf_int() - ack processed VF mbox and FLR interrupt(s)
> + * @hw: dlb2_hw handle for a particular device.
> + * @mbox_bitvec: bit vector returned by dlb2_read_vdev_to_pf_int_bitvec()
> + * @flr_bitvec: bit vector returned by dlb2_read_vf_flr_int_bitvec()
> + *
> + * When done processing all VF requests, this function communicates to the
> + * hardware that processing is complete.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_ack_vdev_to_pf_int(struct dlb2_hw *hw,
> +                            u32 mbox_bitvec,
> +                            u32 flr_bitvec);
> +
> +/**
> + * dlb2_process_wdt_interrupt() - process watchdog timer interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function reads the watchdog timer interrupt cause registers to
> + * determine which port(s) had a watchdog timeout, and notifies the
> + * application(s) that own the port(s).
> + */
> +void dlb2_process_wdt_interrupt(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_process_alarm_interrupt() - process an alarm interrupt
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function reads and logs the alarm syndrome, then acks the interrupt.
> + * This function should be called from the alarm interrupt handler when
> + * interrupt vector DLB2_INT_ALARM fires.
> + */
> +void dlb2_process_alarm_interrupt(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_process_ingress_error_interrupt() - process ingress error interrupts
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function reads the alarm syndrome, logs it, notifies user-space, and
> + * acks the interrupt. This function should be called from the alarm interrupt
> + * handler when interrupt vector DLB2_INT_INGRESS_ERROR fires.
> + *
> + * Return:
> + * Returns true if an ingress error interrupt occurred, false otherwise.
> + */
> +bool dlb2_process_ingress_error_interrupt(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @group_id: sequence number group ID.
> + *
> + * This function returns the configured number of sequence numbers per queue
> + * for the specified group.
> + *
> + * Return:
> + * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
> + */
> +int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw,
> +                                   unsigned int group_id);
> +
> +/**
> + * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
> + * @hw: dlb2_hw handle for a particular device.
> + * @group_id: sequence number group ID.
> + *
> + * This function returns the group's number of in-use slots (i.e. load-balanced
> + * queues using the specified group).
> + *
> + * Return:
> + * Returns -EINVAL if group_id is invalid, else the group's occupancy.
> + */
> +int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
> +                                            unsigned int group_id);
> +
> +/**
> + * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @group_id: sequence number group ID.
> + * @val: requested amount of sequence numbers per queue.
> + *
> + * This function configures the group's number of sequence numbers per queue.
> + * val can be a power-of-two between 32 and 1024, inclusive. This setting can
> + * be configured until the first ordered load-balanced queue is configured, at
> + * which point the configuration is locked.
> + *
> + * Return:
> + * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
> + * ordered queue is configured.
> + */
> +int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
> +                                   unsigned int group_id,
> +                                   unsigned long val);
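[Editor's note: the comment above constrains val to a power of two between 32 and 1024, inclusive. A self-contained sketch of that check; the helper name is illustrative, not part of the dlb2 API.]

```c
#include <stdbool.h>

/*
 * Illustrative check of the documented constraint on val for
 * dlb2_set_group_sequence_numbers(): a power of two in [32, 1024].
 */
static bool sn_group_val_is_valid(unsigned long val)
{
	/* (val & (val - 1)) == 0 holds exactly for powers of two (and 0,
	 * which the range check already rejects).
	 */
	return val >= 32 && val <= 1024 && (val & (val - 1)) == 0;
}
```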
> +
> +/**
> + * dlb2_reset_domain() - reset a scheduling domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function resets and frees a DLB 2.0 scheduling domain and its associated
> + * resources.
> + *
> + * Pre-condition: the driver must ensure software has stopped sending QEs
> + * through this domain's producer ports before invoking this function, or
> + * undefined behavior will result.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, -1 otherwise.
> + *
> + * EINVAL - Invalid domain ID, or the domain is not configured.
> + * EFAULT - Internal error. (Possibly caused if the pre-condition is not
> + *         met.)
> + * ETIMEDOUT - Hardware component didn't reset in the expected time.
> + */
> +int dlb2_reset_domain(struct dlb2_hw *hw,
> +                     u32 domain_id,
> +                     bool vdev_request,
> +                     unsigned int vdev_id);
> +
> +/**
> + * dlb2_ldb_port_owned_by_domain() - query whether a port is owned by a domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @port_id: port ID.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns whether a load-balanced port is owned by a specified
> + * domain.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 if false, 1 if true, <0 otherwise.
> + *
> + * EINVAL - Invalid domain or port ID, or the domain is not configured.
> + */
> +int dlb2_ldb_port_owned_by_domain(struct dlb2_hw *hw,
> +                                 u32 domain_id,
> +                                 u32 port_id,
> +                                 bool vdev_request,
> +                                 unsigned int vdev_id);
> +
> +/**
> + * dlb2_dir_port_owned_by_domain() - query whether a port is owned by a domain
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @port_id: port ID.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns whether a directed port is owned by a specified
> + * domain.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 if false, 1 if true, <0 otherwise.
> + *
> + * EINVAL - Invalid domain or port ID, or the domain is not configured.
> + */
> +int dlb2_dir_port_owned_by_domain(struct dlb2_hw *hw,
> +                                 u32 domain_id,
> +                                 u32 port_id,
> +                                 bool vdev_request,
> +                                 unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_get_num_resources() - query the PCI function's available resources
> + * @hw: dlb2_hw handle for a particular device.
> + * @arg: pointer to resource counts.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns the number of available resources for the PF or for a
> + * VF.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, -EINVAL if vdev_request is true and vdev_id is
> + * invalid.
> + */
> +int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
> +                             struct dlb2_get_num_resources_args *arg,
> +                             bool vdev_request,
> +                             unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_get_num_used_resources() - query the PCI function's used resources
> + * @hw: dlb2_hw handle for a particular device.
> + * @arg: pointer to resource counts.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns the number of resources in use by the PF or a VF. It
> + * fills in the fields that arg points to, except the following:
> + * - max_contiguous_atomic_inflights
> + * - max_contiguous_hist_list_entries
> + * - max_contiguous_ldb_credits
> + * - max_contiguous_dir_credits
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, -EINVAL if vdev_request is true and vdev_id is
> + * invalid.
> + */
> +int dlb2_hw_get_num_used_resources(struct dlb2_hw *hw,
> +                                  struct dlb2_get_num_resources_args *arg,
> +                                  bool vdev_request,
> +                                  unsigned int vdev_id);
> +
> +/**
> + * dlb2_send_async_vdev_to_pf_msg() - (vdev only) send a mailbox message to
> + *                                    the PF
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function sends a VF->PF mailbox message. It is asynchronous, so it
> + * returns once the message is sent but potentially before the PF has processed
> + * the message. The caller must call dlb2_vdev_to_pf_complete() to determine
> + * when the PF has finished processing the request.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_send_async_vdev_to_pf_msg(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_vdev_to_pf_complete() - check the status of an asynchronous mailbox
> + *                              request
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function returns a boolean indicating whether the PF has finished
> + * processing a VF->PF mailbox request. It should only be called after sending
> + * an asynchronous request with dlb2_send_async_vdev_to_pf_msg().
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +bool dlb2_vdev_to_pf_complete(struct dlb2_hw *hw);
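[Editor's note: the asynchronous send/complete pair above implies a poll loop on the caller's side. A generic, self-contained sketch of that bounded poll; here *done stands in for the value dlb2_vdev_to_pf_complete() would return, and a real caller would re-query the hardware and relax the CPU between iterations.]

```c
/*
 * Illustrative caller-side pattern for the asynchronous mailbox protocol:
 * after dlb2_send_async_vdev_to_pf_msg(), poll completion with a bounded
 * retry count rather than spinning forever.
 */
static int wait_for_pf_response(const volatile int *done, int max_retries)
{
	for (int i = 0; i < max_retries; i++) {
		if (*done)
			return 0;	/* PF finished processing */
	}
	return -1;			/* gave up; treat as a timeout */
}
```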
> +
> +/**
> + * dlb2_vf_flr_complete() - check the status of a VF FLR
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function returns a boolean indicating whether the PF has finished
> + * executing the VF FLR. It should only be called after setting the VF's FLR
> + * bit.
> + */
> +bool dlb2_vf_flr_complete(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_send_async_pf_to_vdev_msg() - (PF only) send a mailbox message to a
> + *                                     vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + *
> + * This function sends a PF->vdev mailbox message. It is asynchronous, so it
> + * returns once the message is sent but potentially before the vdev has
> + * processed the message. The caller must call dlb2_pf_to_vdev_complete() to
> + * determine when the vdev has finished processing the request.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_send_async_pf_to_vdev_msg(struct dlb2_hw *hw, unsigned int vdev_id);
> +
> +/**
> + * dlb2_pf_to_vdev_complete() - check the status of an asynchronous mailbox
> + *                            request
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + *
> + * This function returns a boolean indicating whether the vdev has finished
> + * processing a PF->vdev mailbox request. It should only be called after
> + * sending an asynchronous request with dlb2_send_async_pf_to_vdev_msg().
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +bool dlb2_pf_to_vdev_complete(struct dlb2_hw *hw, unsigned int vdev_id);
> +
> +/**
> + * dlb2_pf_read_vf_mbox_req() - (PF only) read a VF->PF mailbox request
> + * @hw: dlb2_hw handle for a particular device.
> + * @vf_id: VF ID.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies one of the PF's VF->PF mailboxes into the array pointed
> + * to by data.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_VF2PF_REQ_BYTES.
> + */
> +int dlb2_pf_read_vf_mbox_req(struct dlb2_hw *hw,
> +                            unsigned int vf_id,
> +                            void *data,
> +                            int len);
> +
> +/**
> + * dlb2_pf_read_vf_mbox_resp() - (PF only) read a VF->PF mailbox response
> + * @hw: dlb2_hw handle for a particular device.
> + * @vf_id: VF ID.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies one of the PF's VF->PF mailboxes into the array pointed
> + * to by data.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_VF2PF_RESP_BYTES.
> + */
> +int dlb2_pf_read_vf_mbox_resp(struct dlb2_hw *hw,
> +                             unsigned int vf_id,
> +                             void *data,
> +                             int len);
> +
> +/**
> + * dlb2_pf_write_vf_mbox_resp() - (PF only) write a PF->VF mailbox response
> + * @hw: dlb2_hw handle for a particular device.
> + * @vf_id: VF ID.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the user-provided message data into one of the PF's
> + * PF->VF mailboxes.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_PF2VF_RESP_BYTES.
> + */
> +int dlb2_pf_write_vf_mbox_resp(struct dlb2_hw *hw,
> +                              unsigned int vf_id,
> +                              void *data,
> +                              int len);
> +
> +/**
> + * dlb2_pf_write_vf_mbox_req() - (PF only) write a PF->VF mailbox request
> + * @hw: dlb2_hw handle for a particular device.
> + * @vf_id: VF ID.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the user-provided message data into one of the PF's
> + * PF->VF mailboxes.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_PF2VF_REQ_BYTES.
> + */
> +int dlb2_pf_write_vf_mbox_req(struct dlb2_hw *hw,
> +                             unsigned int vf_id,
> +                             void *data,
> +                             int len);
> +
> +/**
> + * dlb2_vf_read_pf_mbox_resp() - (VF only) read a PF->VF mailbox response
> + * @hw: dlb2_hw handle for a particular device.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the VF's PF->VF mailbox into the array pointed to by
> + * data.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_PF2VF_RESP_BYTES.
> + */
> +int dlb2_vf_read_pf_mbox_resp(struct dlb2_hw *hw, void *data, int len);
> +
> +/**
> + * dlb2_vf_read_pf_mbox_req() - (VF only) read a PF->VF mailbox request
> + * @hw: dlb2_hw handle for a particular device.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the VF's PF->VF mailbox into the array pointed to by
> + * data.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_PF2VF_REQ_BYTES.
> + */
> +int dlb2_vf_read_pf_mbox_req(struct dlb2_hw *hw, void *data, int len);
> +
> +/**
> + * dlb2_vf_write_pf_mbox_req() - (VF only) write a VF->PF mailbox request
> + * @hw: dlb2_hw handle for a particular device.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the user-provided message data into the VF's VF->PF
> + * mailbox.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_VF2PF_REQ_BYTES.
> + */
> +int dlb2_vf_write_pf_mbox_req(struct dlb2_hw *hw, void *data, int len);
> +
> +/**
> + * dlb2_vf_write_pf_mbox_resp() - (VF only) write a VF->PF mailbox response
> + * @hw: dlb2_hw handle for a particular device.
> + * @data: pointer to message data.
> + * @len: size, in bytes, of the data array.
> + *
> + * This function copies the user-provided message data into the VF's VF->PF
> + * mailbox.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * EINVAL - len >= DLB2_VF2PF_RESP_BYTES.
> + */
> +int dlb2_vf_write_pf_mbox_resp(struct dlb2_hw *hw, void *data, int len);
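[Editor's note: each mailbox copy helper above documents EINVAL when len reaches the mailbox byte count. A self-contained sketch of that bounds-checked copy; MBOX_BYTES and mbox_copy() are stand-ins for the DLB2_*_BYTES constants and the real helpers, which this patch does not show.]

```c
#include <string.h>

/* Stand-in for constants such as DLB2_VF2PF_REQ_BYTES. */
#define MBOX_BYTES 256

/*
 * Illustrative bounds-checked mailbox copy: reject len >= MBOX_BYTES,
 * mirroring the error condition documented in the comments above.
 */
static int mbox_copy(void *mbox, const void *data, int len)
{
	if (len < 0 || len >= MBOX_BYTES)
		return -22;	/* -EINVAL */
	memcpy(mbox, data, (size_t)len);
	return 0;
}
```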
> +
> +/**
> + * dlb2_reset_vdev() - reset the hardware owned by a virtual device
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function resets the hardware owned by a vdev, by resetting the vdev's
> + * domains one by one.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +int dlb2_reset_vdev(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_vdev_is_locked() - check whether the vdev's resources are locked
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function returns whether or not the vdev's resource assignments are
> + * locked. If locked, no resources can be added to or subtracted from the
> + * group.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +bool dlb2_vdev_is_locked(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_lock_vdev() - lock the vdev's resources
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function sets a flag indicating that the vdev is using its resources.
> + * When the vdev is locked, its resource assignment cannot be changed.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_lock_vdev(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_unlock_vdev() - unlock the vdev's resources
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function unlocks the vdev's resource assignment, allowing it to be
> + * modified.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + */
> +void dlb2_unlock_vdev(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_update_vdev_sched_domains() - update the domains assigned to a vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of scheduling domains to assign to this vdev
> + *
> + * This function assigns num scheduling domains to the specified vdev. If the
> + * vdev already has domains assigned, this existing assignment is adjusted
> + * accordingly.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_sched_domains(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_ldb_queues() - update the LDB queues assigned to a vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of LDB queues to assign to this vdev
> + *
> + * This function assigns num LDB queues to the specified vdev. If the vdev
> + * already has LDB queues assigned, this existing assignment is adjusted
> + * accordingly.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_ldb_queues(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_ldb_ports() - update the LDB ports assigned to a vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of LDB ports to assign to this vdev
> + *
> + * This function assigns num LDB ports to the specified vdev. If the vdev
> + * already has LDB ports assigned, this existing assignment is adjusted
> + * accordingly.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_ldb_ports(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_ldb_cos_ports() - update the LDB ports assigned to a vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @cos: class-of-service ID
> + * @num: number of LDB ports to assign to this vdev
> + *
> + * This function assigns num LDB ports from class-of-service cos to the
> + * specified vdev. If the vdev already has LDB ports from this class-of-service
> + * assigned, this existing assignment is adjusted accordingly.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_ldb_cos_ports(struct dlb2_hw *hw,
> +                                  u32 id,
> +                                  u32 cos,
> +                                  u32 num);
> +
> +/**
> + * dlb2_update_vdev_dir_ports() - update the DIR ports assigned to a vdev
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of DIR ports to assign to this vdev
> + *
> + * This function assigns num DIR ports to the specified vdev. If the vdev
> + * already has DIR ports assigned, this existing assignment is adjusted
> + * accordingly.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_dir_ports(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_ldb_credits() - update the vdev's assigned LDB credits
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of LDB credits to assign to this vdev
> + *
> + * This function assigns num LDB credits to the specified vdev. If the vdev
> + * already has LDB credits assigned, this existing assignment is adjusted
> + * accordingly. vdevs are assigned a contiguous chunk of credits, so this
> + * function may fail if a sufficiently large contiguous chunk is not available.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_ldb_credits(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_dir_credits() - update the vdev's assigned DIR credits
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of DIR credits to assign to this vdev
> + *
> + * This function assigns num DIR credits to the specified vdev. If the vdev
> + * already has DIR credits assigned, this existing assignment is adjusted
> + * accordingly. vdevs are assigned a contiguous chunk of credits, so this
> + * function may fail if a sufficiently large contiguous chunk is not available.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_dir_credits(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_hist_list_entries() - update the vdev's assigned HL entries
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of history list entries to assign to this vdev
> + *
> + * This function assigns num history list entries to the specified vdev. If the
> + * vdev already has history list entries assigned, this existing assignment is
> + * adjusted accordingly. vdevs are assigned a contiguous chunk of entries, so
> + * this function may fail if a sufficiently large contiguous chunk is not
> + * available.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_hist_list_entries(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_update_vdev_atomic_inflights() - update the vdev's atomic inflights
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + * @num: number of atomic inflights to assign to this vdev
> + *
> + * This function assigns num atomic inflights to the specified vdev. If the vdev
> + * already has atomic inflights assigned, this existing assignment is adjusted
> + * accordingly. vdevs are assigned a contiguous chunk of entries, so this
> + * function may fail if a sufficiently large contiguous chunk is not available.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid, or the requested resources are
> + *         unavailable.
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_update_vdev_atomic_inflights(struct dlb2_hw *hw, u32 id, u32 num);
> +
> +/**
> + * dlb2_reset_vdev_resources() - reassign the vdev's resources to the PF
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function takes any resources currently assigned to the vdev and
> + * reassigns them to the PF.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - id is invalid
> + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> + */
> +int dlb2_reset_vdev_resources(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_notify_vf() - send a notification to a VF
> + * @hw: dlb2_hw handle for a particular device.
> + * @vf_id: VF ID
> + * @notification: notification
> + *
> + * This function sends a notification (as defined in dlb2_mbox.h) to a VF.
> + *
> + * Return:
> + * Returns 0 upon success, <0 if the VF doesn't ACK the PF->VF interrupt.
> + */
> +int dlb2_notify_vf(struct dlb2_hw *hw,
> +                  unsigned int vf_id,
> +                  u32 notification);
> +
> +/**
> + * dlb2_vdev_in_use() - query whether a virtual device is in use
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual device ID
> + *
> + * This function sends a mailbox request to the vdev to query whether the vdev
> + * is in use.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 for false, 1 for true, and <0 if the mailbox request times out or
> + * an internal error occurs.
> + */
> +int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
> +
> +/**
> + * dlb2_clr_pmcsr_disable() - power on the bulk of the DLB 2.0 logic
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * Clearing the PMCSR must be done at initialization to make the device fully
> + * operational.
> + */
> +void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: queue depth args
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns the depth of a load-balanced queue.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the depth.
> + *
> + * Errors:
> + * EINVAL - Invalid domain ID or queue ID.
> + */
> +int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
> +                               u32 domain_id,
> +                               struct dlb2_get_ldb_queue_depth_args *args,
> +                               struct dlb2_cmd_response *resp,
> +                               bool vdev_request,
> +                               unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: queue depth args
> + * @resp: response structure.
> + * @vdev_request: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> + *
> + * This function returns the depth of a directed queue.
> + *
> + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> + * device.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the depth.
> + *
> + * Errors:
> + * EINVAL - Invalid domain ID or queue ID.
> + */
> +int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
> +                               u32 domain_id,
> +                               struct dlb2_get_dir_queue_depth_args *args,
> +                               struct dlb2_cmd_response *resp,
> +                               bool vdev_request,
> +                               unsigned int vdev_id);
> +
> +enum dlb2_virt_mode {
> +       DLB2_VIRT_NONE,
> +       DLB2_VIRT_SRIOV,
> +       DLB2_VIRT_SIOV,
> +
> +       /* NUM_DLB2_VIRT_MODES must be last */
> +       NUM_DLB2_VIRT_MODES,
> +};
> +
> +/**
> + * dlb2_hw_set_virt_mode() - set the device's virtualization mode
> + * @hw: dlb2_hw handle for a particular device.
> + * @mode: either none, SR-IOV, or Scalable IOV.
> + *
> + * This function sets the virtualization mode of the device. This controls
> + * whether the device uses a software or hardware mailbox.
> + *
> + * This should be called by the PF driver when either SR-IOV or Scalable IOV is
> + * selected as the virtualization mechanism, and by the VF/VDEV driver during
> + * initialization after recognizing itself as an SR-IOV or Scalable IOV device.
> + *
> + * Return:
> + * Returns 0 upon success, <0 otherwise.
> + *
> + * Errors:
> + * EINVAL - Invalid mode.
> + */
> +int dlb2_hw_set_virt_mode(struct dlb2_hw *hw, enum dlb2_virt_mode mode);
> +
> +/**
> + * dlb2_hw_get_virt_mode() - get the device's virtualization mode
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function gets the virtualization mode of the device.
> + */
> +enum dlb2_virt_mode dlb2_hw_get_virt_mode(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_get_ldb_port_phys_id() - get a physical port ID from its virt ID
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual port ID.
> + * @vdev_id: vdev ID.
> + *
> + * Return:
> + * Returns >= 0 upon success, -1 otherwise.
> + */
> +s32 dlb2_hw_get_ldb_port_phys_id(struct dlb2_hw *hw,
> +                                u32 id,
> +                                unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_get_dir_port_phys_id() - get a physical port ID from its virt ID
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: virtual port ID.
> + * @vdev_id: vdev ID.
> + *
> + * Return:
> + * Returns >= 0 upon success, -1 otherwise.
> + */
> +s32 dlb2_hw_get_dir_port_phys_id(struct dlb2_hw *hw,
> +                                u32 id,
> +                                unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_register_sw_mbox() - register a software mailbox
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + * @vdev2pf_mbox: pointer to a 4KB memory page used for vdev->PF communication.
> + * @pf2vdev_mbox: pointer to a 4KB memory page used for PF->vdev communication.
> + * @pf2vdev_inject: callback function for injecting a PF->vdev interrupt.
> + * @inject_arg: user argument for pf2vdev_inject callback.
> + *
> + * When Scalable IOV is enabled, the VDCM must register a software mailbox for
> + * every virtual device during vdev creation.
> + *
> + * This function notifies the driver to use a software mailbox using the
> + * provided pointers, instead of the device's hardware mailbox. When the driver
> + * calls mailbox functions like dlb2_pf_write_vf_mbox_req(), the request will
> + * go to the software mailbox instead of the hardware one. This is used in
> + * Scalable IOV virtualization.
> + */
> +void dlb2_hw_register_sw_mbox(struct dlb2_hw *hw,
> +                             unsigned int vdev_id,
> +                             u32 *vdev2pf_mbox,
> +                             u32 *pf2vdev_mbox,
> +                             void (*pf2vdev_inject)(void *),
> +                             void *inject_arg);
> +
> +/**
> + * dlb2_hw_unregister_sw_mbox() - unregister a software mailbox
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + *
> + * This function notifies the driver to stop using a previously registered
> + * software mailbox.
> + */
> +void dlb2_hw_unregister_sw_mbox(struct dlb2_hw *hw, unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_setup_cq_ims_entry() - setup a CQ's IMS entry
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + * @virt_cq_id: virtual CQ ID.
> + * @is_ldb: CQ is load-balanced.
> + * @addr_lo: least-significant 32 bits of address.
> + * @data: 32 data bits.
> + *
> + * This sets up the CQ's IMS entry with the provided address and data values.
> + * This function should only be called if the device is configured for Scalable
> + * IOV virtualization. The upper 32 address bits are fixed in hardware and thus
> + * not needed.
> + */
> +void dlb2_hw_setup_cq_ims_entry(struct dlb2_hw *hw,
> +                               unsigned int vdev_id,
> +                               u32 virt_cq_id,
> +                               bool is_ldb,
> +                               u32 addr_lo,
> +                               u32 data);
> +
> +/**
> + * dlb2_hw_clear_cq_ims_entry() - clear a CQ's IMS entry
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + * @virt_cq_id: virtual CQ ID.
> + * @is_ldb: CQ is load-balanced.
> + *
> + * This clears the CQ's IMS entry, reverting it to its reset state.
> + */
> +void dlb2_hw_clear_cq_ims_entry(struct dlb2_hw *hw,
> +                               unsigned int vdev_id,
> +                               u32 virt_cq_id,
> +                               bool is_ldb);
> +
> +/**
> + * dlb2_hw_register_pasid() - register a vdev's PASID
> + * @hw: dlb2_hw handle for a particular device.
> + * @vdev_id: vdev ID.
> + * @pasid: the vdev's PASID.
> + *
> + * This function stores the user-supplied PASID, and uses it when configuring
> + * the vdev's CQs.
> + *
> + * Return:
> + * Returns >= 0 upon success, -1 otherwise.
> + */
> +int dlb2_hw_register_pasid(struct dlb2_hw *hw,
> +                          unsigned int vdev_id,
> +                          unsigned int pasid);
> +
> +/**
> + * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in
> + *     progress.
> + * @hw: dlb2_hw handle for a particular device.
> + * @domain_id: domain ID.
> + * @args: number of unmaps in progress args
> + * @resp: response structure.
> + * @vf_request: indicates whether this request came from a VF.
> + * @vf_id: If vf_request is true, this contains the VF's ID.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> + * contains the number of unmaps in progress.
> + *
> + * Errors:
> + * EINVAL - Invalid port ID.
> + */
> +int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,
> +                               u32 domain_id,
> +                               struct dlb2_pending_port_unmaps_args *args,
> +                               struct dlb2_cmd_response *resp,
> +                               bool vf_request,
> +                               unsigned int vf_id);
> +
> +/**
> + * dlb2_hw_get_cos_bandwidth() - returns the percent of bandwidth allocated
> + *     to a port class-of-service.
> + * @hw: dlb2_hw handle for a particular device.
> + * @cos_id: class-of-service ID.
> + *
> + * Return:
> + * Returns -EINVAL if cos_id is invalid, else the class' bandwidth allocation.
> + */
> +int dlb2_hw_get_cos_bandwidth(struct dlb2_hw *hw, u32 cos_id);
> +
> +/**
> + * dlb2_hw_set_cos_bandwidth() - set a bandwidth allocation percentage for a
> + *     port class-of-service.
> + * @hw: dlb2_hw handle for a particular device.
> + * @cos_id: class-of-service ID.
> + * @bandwidth: class-of-service bandwidth.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - Invalid cos ID, bandwidth is greater than 100, or bandwidth would
> + *         cause the total bandwidth across all classes of service to exceed
> + *         100%.
> + */
> +int dlb2_hw_set_cos_bandwidth(struct dlb2_hw *hw, u32 cos_id, u8 bandwidth);
> +
> +enum dlb2_wd_tmo {
> +       /* 40s watchdog timeout */
> +       DLB2_WD_TMO_40S,
> +       /* 10s watchdog timeout */
> +       DLB2_WD_TMO_10S,
> +       /* 1s watchdog timeout */
> +       DLB2_WD_TMO_1S,
> +
> +       /* Must be last */
> +       NUM_DLB2_WD_TMOS,
> +};
> +
> +/**
> + * dlb2_hw_enable_wd_timer() - enable the CQ watchdog timers with a
> + *     caller-specified timeout.
> + * @hw: dlb2_hw handle for a particular device.
> + * @tmo: watchdog timeout.
> + *
> + * This function should be called during device initialization and after reset.
> + * The watchdog timer interrupt must also be enabled per-CQ, using either
> + * dlb2_hw_enable_dir_cq_wd_int() or dlb2_hw_enable_ldb_cq_wd_int().
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - Invalid timeout.
> + */
> +int dlb2_hw_enable_wd_timer(struct dlb2_hw *hw, enum dlb2_wd_tmo tmo);
> +
> +/**
> + * dlb2_hw_enable_dir_cq_wd_int() - enable the CQ watchdog interrupt on an
> + *     individual CQ.
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: port ID.
> + * @vdev_req: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_req is true, this contains the vdev's ID.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - Invalid directed port ID.
> + */
> +int dlb2_hw_enable_dir_cq_wd_int(struct dlb2_hw *hw,
> +                                u32 id,
> +                                bool vdev_req,
> +                                unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_enable_ldb_cq_wd_int() - enable the CQ watchdog interrupt on an
> + *     individual CQ.
> + * @hw: dlb2_hw handle for a particular device.
> + * @id: port ID.
> + * @vdev_req: indicates whether this request came from a vdev.
> + * @vdev_id: If vdev_req is true, this contains the vdev's ID.
> + *
> + * Return:
> + * Returns 0 upon success, < 0 otherwise.
> + *
> + * Errors:
> + * EINVAL - Invalid load-balanced port ID.
> + */
> +int dlb2_hw_enable_ldb_cq_wd_int(struct dlb2_hw *hw,
> +                                u32 id,
> +                                bool vdev_req,
> +                                unsigned int vdev_id);
> +
> +/**
> + * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
> + *     ports.
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function must be called prior to configuring scheduling domains.
> + */
> +void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
> + * @hw: dlb2_hw handle for a particular device.
> + *
> + * This function must be called prior to configuring scheduling domains.
> + */
> +void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw);
> +
> +/**
> + * dlb2_hw_set_qe_arbiter_weights() - program QE arbiter weights
> + * @hw: dlb2_hw handle for a particular device.
> + * @weight: 8-entry array of arbiter weights.
> + *
> + * weight[N] programs priority N's weight. In cases where the 8 priorities are
> + * reduced to 4 bins, the mapping is:
> + * - weight[1] programs bin 0
> + * - weight[3] programs bin 1
> + * - weight[5] programs bin 2
> + * - weight[7] programs bin 3
> + */
> +void dlb2_hw_set_qe_arbiter_weights(struct dlb2_hw *hw, u8 weight[8]);
> +
> +/**
> + * dlb2_hw_set_qid_arbiter_weights() - program QID arbiter weights
> + * @hw: dlb2_hw handle for a particular device.
> + * @weight: 8-entry array of arbiter weights.
> + *
> + * weight[N] programs priority N's weight. In cases where the 8 priorities are
> + * reduced to 4 bins, the mapping is:
> + * - weight[1] programs bin 0
> + * - weight[3] programs bin 1
> + * - weight[5] programs bin 2
> + * - weight[7] programs bin 3
> + */
> +void dlb2_hw_set_qid_arbiter_weights(struct dlb2_hw *hw, u8 weight[8]);
> +
> +/**
> + * dlb2_hw_ldb_cq_interrupt_enabled() - Check if the interrupt is enabled
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: physical load-balanced port ID.
> + *
> + * This function returns whether the load-balanced CQ interrupt is enabled.
> + */
> +int dlb2_hw_ldb_cq_interrupt_enabled(struct dlb2_hw *hw, int port_id);
> +
> +/**
> + * dlb2_hw_ldb_cq_interrupt_set_mode() - Program the CQ interrupt mode
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: physical load-balanced port ID.
> + * @mode: interrupt type (DLB2_CQ_ISR_MODE_{DIS, MSI, MSIX, ADI})
> + *
> + * This function can be used to disable (MODE_DIS) and re-enable the
> + * load-balanced CQ's interrupt. It should only be called after the interrupt
> + * has been configured with dlb2_configure_ldb_cq_interrupt().
> + */
> +void dlb2_hw_ldb_cq_interrupt_set_mode(struct dlb2_hw *hw,
> +                                      int port_id,
> +                                      int mode);
> +
> +/**
> + * dlb2_hw_dir_cq_interrupt_enabled() - Check if the interrupt is enabled
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: physical load-balanced port ID.
> + *
> + * This function returns whether the directed CQ interrupt is enabled.
> + */
> +int dlb2_hw_dir_cq_interrupt_enabled(struct dlb2_hw *hw, int port_id);
> +
> +/**
> + * dlb2_hw_dir_cq_interrupt_set_mode() - Program the CQ interrupt mode
> + * @hw: dlb2_hw handle for a particular device.
> + * @port_id: physical directed port ID.
> + * @mode: interrupt type (DLB2_CQ_ISR_MODE_{DIS, MSI, MSIX, ADI})
> + *
> + * This function can be used to disable (MODE_DIS) and re-enable the
> + * directed CQ's interrupt. It should only be called after the interrupt
> + * has been configured with dlb2_configure_dir_cq_interrupt().
> + */
> +void dlb2_hw_dir_cq_interrupt_set_mode(struct dlb2_hw *hw,
> +                                      int port_id,
> +                                      int mode);
> +
> +#endif /* __DLB2_RESOURCE_H */
> diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
> new file mode 100644
> index 0000000..bd8590d
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/dlb2_main.c
> @@ -0,0 +1,615 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include <stdint.h>
> +#include <stdbool.h>
> +#include <stdio.h>
> +#include <errno.h>
> +#include <assert.h>
> +#include <unistd.h>
> +#include <string.h>
> +
> +#include <rte_malloc.h>
> +#include <rte_errno.h>
> +
> +#include "base/dlb2_resource.h"
> +#include "base/dlb2_osdep.h"
> +#include "base/dlb2_regs.h"
> +#include "dlb2_main.h"
> +#include "../dlb2_user.h"
> +#include "../dlb2_priv.h"
> +#include "../dlb2_iface.h"
> +#include "../dlb2_inline_fns.h"
> +
> +#define PF_ID_ZERO 0   /* PF ONLY! */
> +#define NO_OWNER_VF 0  /* PF ONLY! */
> +#define NOT_VF_REQ false /* PF ONLY! */
> +
> +#define DLB2_PCI_CFG_SPACE_SIZE 256
> +#define DLB2_PCI_CAP_POINTER 0x34
> +#define DLB2_PCI_CAP_NEXT(hdr) (((hdr) >> 8) & 0xFC)
> +#define DLB2_PCI_CAP_ID(hdr) ((hdr) & 0xFF)
> +#define DLB2_PCI_EXT_CAP_NEXT(hdr) (((hdr) >> 20) & 0xFFC)
> +#define DLB2_PCI_EXT_CAP_ID(hdr) ((hdr) & 0xFFFF)
> +#define DLB2_PCI_EXT_CAP_ID_ERR 1
> +#define DLB2_PCI_ERR_UNCOR_MASK 8
> +#define DLB2_PCI_ERR_UNC_UNSUP  0x00100000
> +
> +#define DLB2_PCI_EXP_DEVCTL 8
> +#define DLB2_PCI_LNKCTL 16
> +#define DLB2_PCI_SLTCTL 24
> +#define DLB2_PCI_RTCTL 28
> +#define DLB2_PCI_EXP_DEVCTL2 40
> +#define DLB2_PCI_LNKCTL2 48
> +#define DLB2_PCI_SLTCTL2 56
> +#define DLB2_PCI_CMD 4
> +#define DLB2_PCI_X_CMD 2
> +#define DLB2_PCI_EXP_DEVSTA 10
> +#define DLB2_PCI_EXP_DEVSTA_TRPND 0x20
> +#define DLB2_PCI_EXP_DEVCTL_BCR_FLR 0x8000
> +
> +#define DLB2_PCI_CAP_ID_EXP       0x10
> +#define DLB2_PCI_CAP_ID_MSIX      0x11
> +#define DLB2_PCI_EXT_CAP_ID_PAS   0x1B
> +#define DLB2_PCI_EXT_CAP_ID_PRI   0x13
> +#define DLB2_PCI_EXT_CAP_ID_ACS   0xD
> +
> +#define DLB2_PCI_PRI_CTRL_ENABLE         0x1
> +#define DLB2_PCI_PRI_ALLOC_REQ           0xC
> +#define DLB2_PCI_PRI_CTRL                0x4
> +#define DLB2_PCI_MSIX_FLAGS              0x2
> +#define DLB2_PCI_MSIX_FLAGS_ENABLE       0x8000
> +#define DLB2_PCI_MSIX_FLAGS_MASKALL      0x4000
> +#define DLB2_PCI_ERR_ROOT_STATUS         0x30
> +#define DLB2_PCI_ERR_COR_STATUS          0x10
> +#define DLB2_PCI_ERR_UNCOR_STATUS        0x4
> +#define DLB2_PCI_COMMAND_INTX_DISABLE    0x400
> +#define DLB2_PCI_ACS_CAP                 0x4
> +#define DLB2_PCI_ACS_CTRL                0x6
> +#define DLB2_PCI_ACS_SV                  0x1
> +#define DLB2_PCI_ACS_RR                  0x4
> +#define DLB2_PCI_ACS_CR                  0x8
> +#define DLB2_PCI_ACS_UF                  0x10
> +#define DLB2_PCI_ACS_EC                  0x20
> +
> +static int
> +dlb2_pci_find_ext_capability(struct rte_pci_device *pdev, uint32_t id)
> +{
> +       uint32_t hdr;
> +       size_t sz;
> +       int pos;
> +
> +       pos = DLB2_PCI_CFG_SPACE_SIZE;
> +       sz = sizeof(hdr);
> +
> +       while (pos > 0xFF) {
> +               if (rte_pci_read_config(pdev, &hdr, sz, pos) != (int)sz)
> +                       return -1;
> +
> +               if (DLB2_PCI_EXT_CAP_ID(hdr) == id)
> +                       return pos;
> +
> +               pos = DLB2_PCI_EXT_CAP_NEXT(hdr);
> +       }
> +
> +       return -1;
> +}
> +
> +static int dlb2_pci_find_capability(struct rte_pci_device *pdev, uint32_t id)
> +{
> +       uint8_t pos;
> +       int ret;
> +       uint16_t hdr;
> +
> +       ret = rte_pci_read_config(pdev, &pos, 1, DLB2_PCI_CAP_POINTER);
> +       if (ret != 1)
> +               return -1;
> +
> +       pos &= 0xFC;
> +
> +       while (pos > 0x3F) {
> +               ret = rte_pci_read_config(pdev, &hdr, 2, pos);
> +               if (ret != 2)
> +                       return -1;
> +
> +               if (DLB2_PCI_CAP_ID(hdr) == id)
> +                       return pos;
> +
> +               if (DLB2_PCI_CAP_ID(hdr) == 0xFF)
> +                       return -1;
> +
> +               pos = DLB2_PCI_CAP_NEXT(hdr);
> +       }
> +
> +       return -1;
> +}
> +
> +static int
> +dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
> +{
> +       int i;
> +
> +       if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_MOVDIR64B))
> +               dlb2_dev->enqueue_four = dlb2_movdir64b;
> +       else
> +               dlb2_dev->enqueue_four = dlb2_movntdq;
> +
> +       /* Initialize software state */
> +       for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++)
> +               dlb2_list_init_head(&dlb2_dev->ldb_port_pages[i].list);
> +
> +       for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++)
> +               dlb2_list_init_head(&dlb2_dev->dir_port_pages[i].list);
> +
> +       rte_spinlock_init(&dlb2_dev->resource_mutex);
> +       rte_spinlock_init(&dlb2_dev->measurement_lock);
> +
> +       return 0;
> +}
> +
> +static void dlb2_pf_enable_pm(struct dlb2_dev *dlb2_dev)
> +{
> +       dlb2_clr_pmcsr_disable(&dlb2_dev->hw);
> +}
> +
> +#define DLB2_READY_RETRY_LIMIT 1000
> +static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev)
> +{
> +       u32 retries = 0;
> +
> +       /* Allow at least 1s for the device to become active after power-on */
> +       for (retries = 0; retries < DLB2_READY_RETRY_LIMIT; retries++) {
> +               union dlb2_cfg_mstr_cfg_diagnostic_idle_status idle;
> +               union dlb2_cfg_mstr_cfg_pm_status pm_st;
> +               u32 addr;
> +
> +               addr = DLB2_CFG_MSTR_CFG_PM_STATUS;
> +               pm_st.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
> +               addr = DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS;
> +               idle.val = DLB2_CSR_RD(&dlb2_dev->hw, addr);
> +               if (pm_st.field.pmsm == 1 && idle.field.dlb_func_idle == 1)
> +                       break;
> +
> +               rte_delay_ms(1);
> +       }
> +
> +       if (retries == DLB2_READY_RETRY_LIMIT) {
> +               printf("[%s()] wait for device ready timed out\n",
> +                      __func__);
> +               return -1;
> +       }
> +
> +       return 0;
> +}
> +
> +struct dlb2_dev *
> +dlb2_probe(struct rte_pci_device *pdev)
> +{
> +       struct dlb2_dev *dlb2_dev;
> +       int ret = 0;
> +
> +       DLB2_INFO(dlb2_dev, "probe\n");
> +
> +       dlb2_dev = rte_malloc("DLB2_PF", sizeof(struct dlb2_dev),
> +                             RTE_CACHE_LINE_SIZE);
> +
> +       if (dlb2_dev == NULL) {
> +               ret = -ENOMEM;
> +               goto dlb2_dev_malloc_fail;
> +       }
> +
> +       /* PCI Bus driver has already mapped bar space into process.
> +        * Save off our IO register and FUNC addresses.
> +        */
> +
> +       /* BAR 0 */
> +       if (pdev->mem_resource[0].addr == NULL) {
> +               DLB2_ERR(dlb2_dev, "probe: BAR 0 addr (func_kva) is NULL\n");
> +               ret = -EINVAL;
> +               goto pci_mmap_bad_addr;
> +       }
> +       dlb2_dev->hw.func_kva = (void *)(uintptr_t)pdev->mem_resource[0].addr;
> +       dlb2_dev->hw.func_phys_addr = pdev->mem_resource[0].phys_addr;
> +
> +       DLB2_INFO(dlb2_dev, "DLB2 FUNC VA=%p, PA=%p, len=%p\n",
> +                 (void *)dlb2_dev->hw.func_kva,
> +                 (void *)dlb2_dev->hw.func_phys_addr,
> +                 (void *)(pdev->mem_resource[0].len));
> +
> +       /* BAR 2 */
> +       if (pdev->mem_resource[2].addr == NULL) {
> +               DLB2_ERR(dlb2_dev, "probe: BAR 2 addr (csr_kva) is NULL\n");
> +               ret = -EINVAL;
> +               goto pci_mmap_bad_addr;
> +       }
> +       dlb2_dev->hw.csr_kva = (void *)(uintptr_t)pdev->mem_resource[2].addr;
> +       dlb2_dev->hw.csr_phys_addr = pdev->mem_resource[2].phys_addr;
> +
> +       DLB2_INFO(dlb2_dev, "DLB2 CSR VA=%p, PA=%p, len=%p\n",
> +                 (void *)dlb2_dev->hw.csr_kva,
> +                 (void *)dlb2_dev->hw.csr_phys_addr,
> +                 (void *)(pdev->mem_resource[2].len));
> +
> +       dlb2_dev->pdev = pdev;
> +
> +       /* PM enable must be done before any other MMIO accesses, and this
> +        * setting is persistent across device reset.
> +        */
> +       dlb2_pf_enable_pm(dlb2_dev);
> +
> +       ret = dlb2_pf_wait_for_device_ready(dlb2_dev);
> +       if (ret)
> +               goto wait_for_device_ready_fail;
> +
> +       ret = dlb2_pf_reset(dlb2_dev);
> +       if (ret)
> +               goto dlb2_reset_fail;
> +
> +       ret = dlb2_pf_init_driver_state(dlb2_dev);
> +       if (ret)
> +               goto init_driver_state_fail;
> +
> +       ret = dlb2_resource_init(&dlb2_dev->hw);
> +       if (ret)
> +               goto resource_init_fail;
> +
> +       return dlb2_dev;
> +
> +resource_init_fail:
> +       dlb2_resource_free(&dlb2_dev->hw);
> +init_driver_state_fail:
> +dlb2_reset_fail:
> +pci_mmap_bad_addr:
> +wait_for_device_ready_fail:
> +       rte_free(dlb2_dev);
> +dlb2_dev_malloc_fail:
> +       rte_errno = ret;
> +       return NULL;
> +}
> +
> +int
> +dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
> +{
> +       int ret = 0;
> +       int i = 0;
> +       uint32_t dword[16];
> +       uint16_t cmd;
> +       off_t off;
> +
> +       uint16_t dev_ctl_word;
> +       uint16_t dev_ctl2_word;
> +       uint16_t lnk_word;
> +       uint16_t lnk_word2;
> +       uint16_t slt_word;
> +       uint16_t slt_word2;
> +       uint16_t rt_ctl_word;
> +       uint32_t pri_reqs_dword;
> +       uint16_t pri_ctrl_word;
> +
> +       int pcie_cap_offset;
> +       int pri_cap_offset;
> +       int msix_cap_offset;
> +       int err_cap_offset;
> +       int acs_cap_offset;
> +       int wait_count;
> +
> +       uint16_t devsta_busy_word;
> +       uint16_t devctl_word;
> +
> +       struct rte_pci_device *pdev = dlb2_dev->pdev;
> +
> +       /* Save PCI config state */
> +
> +       for (i = 0; i < 16; i++) {
> +               if (rte_pci_read_config(pdev, &dword[i], 4, i * 4) != 4)
> +                       return -1;
> +       }
> +
> +       pcie_cap_offset = dlb2_pci_find_capability(pdev, DLB2_PCI_CAP_ID_EXP);
> +
> +       if (pcie_cap_offset < 0) {
> +               printf("[%s()] failed to find the pcie capability\n",
> +                      __func__);
> +               return pcie_cap_offset;
> +       }
> +
> +       off = pcie_cap_offset + DLB2_PCI_EXP_DEVCTL;
> +       if (rte_pci_read_config(pdev, &dev_ctl_word, 2, off) != 2)
> +               dev_ctl_word = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_LNKCTL;
> +       if (rte_pci_read_config(pdev, &lnk_word, 2, off) != 2)
> +               lnk_word = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_SLTCTL;
> +       if (rte_pci_read_config(pdev, &slt_word, 2, off) != 2)
> +               slt_word = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_RTCTL;
> +       if (rte_pci_read_config(pdev, &rt_ctl_word, 2, off) != 2)
> +               rt_ctl_word = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_EXP_DEVCTL2;
> +       if (rte_pci_read_config(pdev, &dev_ctl2_word, 2, off) != 2)
> +               dev_ctl2_word = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_LNKCTL2;
> +       if (rte_pci_read_config(pdev, &lnk_word2, 2, off) != 2)
> +               lnk_word2 = 0;
> +
> +       off = pcie_cap_offset + DLB2_PCI_SLTCTL2;
> +       if (rte_pci_read_config(pdev, &slt_word2, 2, off) != 2)
> +               slt_word2 = 0;
> +
> +       off = DLB2_PCI_EXT_CAP_ID_PRI;
> +       pri_cap_offset = dlb2_pci_find_ext_capability(pdev, off);
> +
> +       if (pri_cap_offset >= 0) {
> +               off = pri_cap_offset + DLB2_PCI_PRI_ALLOC_REQ;
> +               if (rte_pci_read_config(pdev, &pri_reqs_dword, 4, off) != 4)
> +                       pri_reqs_dword = 0;
> +       }
> +
> +       /* clear the PCI command register before issuing the FLR */
> +
> +       off = DLB2_PCI_CMD;
> +       cmd = 0;
> +       if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
> +               printf("[%s()] failed to write the pci command\n",
> +                      __func__);
> +               return -1;
> +       }
> +
> +       /* issue the FLR */
> +       for (wait_count = 0; wait_count < 4; wait_count++) {
> +               int sleep_time;
> +
> +               off = pcie_cap_offset + DLB2_PCI_EXP_DEVSTA;
> +               ret = rte_pci_read_config(pdev, &devsta_busy_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to read the pci device status\n",
> +                              __func__);
> +                       return ret;
> +               }
> +
> +               if (!(devsta_busy_word & DLB2_PCI_EXP_DEVSTA_TRPND))
> +                       break;
> +
> +               sleep_time = (1 << (wait_count)) * 100;
> +               rte_delay_ms(sleep_time);
> +       }
> +
> +       if (wait_count == 4) {
> +               printf("[%s()] wait for pci pending transactions timed out\n",
> +                      __func__);
> +               return -1;
> +       }
> +
> +       off = pcie_cap_offset + DLB2_PCI_EXP_DEVCTL;
> +       ret = rte_pci_read_config(pdev, &devctl_word, 2, off);
> +       if (ret != 2) {
> +               printf("[%s()] failed to read the pcie device control\n",
> +                      __func__);
> +               return ret;
> +       }
> +
> +       devctl_word |= DLB2_PCI_EXP_DEVCTL_BCR_FLR;
> +
> +       ret = rte_pci_write_config(pdev, &devctl_word, 2, off);
> +       if (ret != 2) {
> +               printf("[%s()] failed to write the pcie device control\n",
> +                      __func__);
> +               return ret;
> +       }
> +
> +       rte_delay_ms(100);
> +
> +       /* Restore PCI config state */
> +
> +       if (pcie_cap_offset >= 0) {
> +               off = pcie_cap_offset + DLB2_PCI_EXP_DEVCTL;
> +               ret = rte_pci_write_config(pdev, &dev_ctl_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie device control at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_LNKCTL;
> +               ret = rte_pci_write_config(pdev, &lnk_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_SLTCTL;
> +               ret = rte_pci_write_config(pdev, &slt_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_RTCTL;
> +               ret = rte_pci_write_config(pdev, &rt_ctl_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_EXP_DEVCTL2;
> +               ret = rte_pci_write_config(pdev, &dev_ctl2_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_LNKCTL2;
> +               ret = rte_pci_write_config(pdev, &lnk_word2, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pcie_cap_offset + DLB2_PCI_SLTCTL2;
> +               ret = rte_pci_write_config(pdev, &slt_word2, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +       }
> +
> +       if (pri_cap_offset >= 0) {
> +               pri_ctrl_word = DLB2_PCI_PRI_CTRL_ENABLE;
> +
> +               off = pri_cap_offset + DLB2_PCI_PRI_ALLOC_REQ;
> +               ret = rte_pci_write_config(pdev, &pri_reqs_dword, 4, off);
> +               if (ret != 4) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = pri_cap_offset + DLB2_PCI_PRI_CTRL;
> +               ret = rte_pci_write_config(pdev, &pri_ctrl_word, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +       }
> +
> +       off = DLB2_PCI_EXT_CAP_ID_ERR;
> +       err_cap_offset = dlb2_pci_find_ext_capability(pdev, off);
> +
> +       if (err_cap_offset >= 0) {
> +               uint32_t tmp;
> +
> +               off = err_cap_offset + DLB2_PCI_ERR_ROOT_STATUS;
> +               if (rte_pci_read_config(pdev, &tmp, 4, off) != 4)
> +                       tmp = 0;
> +
> +               ret = rte_pci_write_config(pdev, &tmp, 4, off);
> +               if (ret != 4) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = err_cap_offset + DLB2_PCI_ERR_COR_STATUS;
> +               if (rte_pci_read_config(pdev, &tmp, 4, off) != 4)
> +                       tmp = 0;
> +
> +               ret = rte_pci_write_config(pdev, &tmp, 4, off);
> +               if (ret != 4) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = err_cap_offset + DLB2_PCI_ERR_UNCOR_STATUS;
> +               if (rte_pci_read_config(pdev, &tmp, 4, off) != 4)
> +                       tmp = 0;
> +
> +               ret = rte_pci_write_config(pdev, &tmp, 4, off);
> +               if (ret != 4) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +       }
> +
> +       for (i = 16; i > 0; i--) {
> +               off = (i - 1) * 4;
> +               ret = rte_pci_write_config(pdev, &dword[i - 1], 4, off);
> +               if (ret != 4) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +       }
> +
> +       off = DLB2_PCI_CMD;
> +       if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
> +               cmd &= ~DLB2_PCI_COMMAND_INTX_DISABLE;
> +               if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
> +                       printf("[%s()] failed to write the pci command\n",
> +                              __func__);
> +                       return -1;
> +               }
> +       }
> +
> +       msix_cap_offset = dlb2_pci_find_capability(pdev,
> +                                                  DLB2_PCI_CAP_ID_MSIX);
> +       if (msix_cap_offset >= 0) {
> +               off = msix_cap_offset + DLB2_PCI_MSIX_FLAGS;
> +               if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
> +                       cmd |= DLB2_PCI_MSIX_FLAGS_ENABLE;
> +                       cmd |= DLB2_PCI_MSIX_FLAGS_MASKALL;
> +                       if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
> +                               printf("[%s()] failed to write msix flags\n",
> +                                      __func__);
> +                               return -1;
> +                       }
> +               }
> +
> +               off = msix_cap_offset + DLB2_PCI_MSIX_FLAGS;
> +               if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
> +                       cmd &= ~DLB2_PCI_MSIX_FLAGS_MASKALL;
> +                       if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
> +                               printf("[%s()] failed to write msix flags\n",
> +                                      __func__);
> +                               return -1;
> +                       }
> +               }
> +       }
> +
> +       off = DLB2_PCI_EXT_CAP_ID_ACS;
> +       acs_cap_offset = dlb2_pci_find_ext_capability(pdev, off);
> +
> +       if (acs_cap_offset >= 0) {
> +               uint16_t acs_cap, acs_ctrl, acs_mask;
> +               off = acs_cap_offset + DLB2_PCI_ACS_CAP;
> +               if (rte_pci_read_config(pdev, &acs_cap, 2, off) != 2)
> +                       acs_cap = 0;
> +
> +               off = acs_cap_offset + DLB2_PCI_ACS_CTRL;
> +               if (rte_pci_read_config(pdev, &acs_ctrl, 2, off) != 2)
> +                       acs_ctrl = 0;
> +
> +               acs_mask = DLB2_PCI_ACS_SV | DLB2_PCI_ACS_RR;
> +               acs_mask |= (DLB2_PCI_ACS_CR | DLB2_PCI_ACS_UF);
> +               acs_ctrl |= (acs_cap & acs_mask);
> +
> +               ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +
> +               off = acs_cap_offset + DLB2_PCI_ACS_CTRL;
> +               if (rte_pci_read_config(pdev, &acs_ctrl, 2, off) != 2)
> +                       acs_ctrl = 0;
> +
> +               acs_mask = DLB2_PCI_ACS_RR | DLB2_PCI_ACS_CR;
> +               acs_mask |= DLB2_PCI_ACS_EC;
> +               acs_ctrl &= ~acs_mask;
> +
> +               off = acs_cap_offset + DLB2_PCI_ACS_CTRL;
> +               ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
> +               if (ret != 2) {
> +                       printf("[%s()] failed to write the pcie config space at offset %d\n",
> +                               __func__, (int)off);
> +                       return ret;
> +               }
> +       }
> +
> +       return 0;
> +}
> diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
> new file mode 100644
> index 0000000..ec96f11
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/dlb2_main.h
> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB2_MAIN_H
> +#define __DLB2_MAIN_H
> +
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +#include <rte_spinlock.h>
> +#include <rte_pci.h>
> +#include <rte_bus_pci.h>
> +
> +#ifndef PAGE_SIZE
> +#define PAGE_SIZE (sysconf(_SC_PAGESIZE))
> +#endif
> +
> +#include "base/dlb2_hw_types.h"
> +#include "../dlb2_user.h"
> +
> +#define DLB2_DEFAULT_UNREGISTER_TIMEOUT_S 5
> +
> +struct dlb2_dev;
> +
> +struct dlb2_port_memory {
> +       struct dlb2_list_head list;
> +       void *cq_base;
> +       bool valid;
> +};
> +
> +struct dlb2_dev {
> +       struct rte_pci_device *pdev;
> +       struct dlb2_hw hw;
> +       /* struct list_head list; */
> +       struct device *dlb2_device;
> +       struct dlb2_port_memory ldb_port_pages[DLB2_MAX_NUM_LDB_PORTS];
> +       struct dlb2_port_memory dir_port_pages[DLB2_MAX_NUM_DIR_PORTS];
> +       /* The enqueue_four function enqueues four HCWs (one cache-line worth)
> +        * to the DLB2, using whichever mechanism is supported by the platform
> +        * on which this driver is running.
> +        */
> +       void (*enqueue_four)(void *qe4, void *pp_addr);
> +
> +       bool domain_reset_failed;
> +       /* The resource mutex serializes access to driver data structures and
> +        * hardware registers.
> +        */
> +       rte_spinlock_t resource_mutex;
> +       rte_spinlock_t measurement_lock;
> +       bool worker_launched;
> +       u8 revision;
> +};
> +
> +struct dlb2_dev *dlb2_probe(struct rte_pci_device *pdev);
> +
> +int dlb2_pf_reset(struct dlb2_dev *dlb2_dev);
> +int dlb2_pf_create_sched_domain(struct dlb2_hw *hw,
> +                               struct dlb2_create_sched_domain_args *args,
> +                               struct dlb2_cmd_response *resp);
> +int dlb2_pf_create_ldb_queue(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_create_ldb_queue_args *args,
> +                            struct dlb2_cmd_response *resp);
> +int dlb2_pf_create_dir_queue(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_create_dir_queue_args *args,
> +                            struct dlb2_cmd_response *resp);
> +int dlb2_pf_create_ldb_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_create_ldb_port_args *args,
> +                           uintptr_t cq_dma_base,
> +                           struct dlb2_cmd_response *resp);
> +int dlb2_pf_create_dir_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_create_dir_port_args *args,
> +                           uintptr_t cq_dma_base,
> +                           struct dlb2_cmd_response *resp);
> +int dlb2_pf_start_domain(struct dlb2_hw *hw,
> +                        u32 domain_id,
> +                        struct dlb2_start_domain_args *args,
> +                        struct dlb2_cmd_response *resp);
> +int dlb2_pf_enable_ldb_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_enable_ldb_port_args *args,
> +                           struct dlb2_cmd_response *resp);
> +int dlb2_pf_disable_ldb_port(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_disable_ldb_port_args *args,
> +                            struct dlb2_cmd_response *resp);
> +int dlb2_pf_enable_dir_port(struct dlb2_hw *hw,
> +                           u32 domain_id,
> +                           struct dlb2_enable_dir_port_args *args,
> +                           struct dlb2_cmd_response *resp);
> +int dlb2_pf_disable_dir_port(struct dlb2_hw *hw,
> +                            u32 domain_id,
> +                            struct dlb2_disable_dir_port_args *args,
> +                            struct dlb2_cmd_response *resp);
> +int dlb2_pf_reset_domain(struct dlb2_hw *hw, u32 domain_id);
> +int dlb2_pf_ldb_port_owned_by_domain(struct dlb2_hw *hw,
> +                                    u32 domain_id,
> +                                    u32 port_id);
> +int dlb2_pf_dir_port_owned_by_domain(struct dlb2_hw *hw,
> +                                    u32 domain_id,
> +                                    u32 port_id);
> +
> +#endif /* __DLB2_MAIN_H */
> diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
> new file mode 100644
> index 0000000..7d64309
> --- /dev/null
> +++ b/drivers/event/dlb2/pf/dlb2_pf.c
> @@ -0,0 +1,244 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include <stdint.h>
> +#include <stdbool.h>
> +#include <stdio.h>
> +#include <sys/mman.h>
> +#include <sys/fcntl.h>
> +#include <sys/time.h>
> +#include <errno.h>
> +#include <assert.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +#include <rte_dev.h>
> +#include <rte_devargs.h>
> +#include <rte_mbuf.h>
> +#include <rte_ring.h>
> +#include <rte_errno.h>
> +#include <rte_kvargs.h>
> +#include <rte_malloc.h>
> +#include <rte_cycles.h>
> +#include <rte_io.h>
> +#include <rte_pci.h>
> +#include <rte_bus_pci.h>
> +#include <rte_eventdev.h>
> +#include <rte_eventdev_pmd.h>
> +#include <rte_eventdev_pmd_pci.h>
> +#include <rte_memory.h>
> +#include <rte_string_fns.h>
> +
> +#include "../dlb2_priv.h"
> +#include "../dlb2_iface.h"
> +#include "../dlb2_inline_fns.h"
> +#include "dlb2_main.h"
> +#include "base/dlb2_hw_types.h"
> +#include "base/dlb2_osdep.h"
> +#include "base/dlb2_resource.h"
> +
> +static const char *event_dlb2_pf_name = RTE_STR(EVDEV_DLB2_NAME_PMD);
> +
> +static void
> +dlb2_pf_low_level_io_init(void)
> +{
> +       int i;
> +       /* Addresses will be initialized at port create */
> +       for (i = 0; i < DLB2_MAX_NUM_PORTS; i++) {
> +               /* First directed ports */
> +               dlb2_port[i][DLB2_DIR_PORT].pp_addr = NULL;
> +               dlb2_port[i][DLB2_DIR_PORT].cq_base = NULL;
> +               dlb2_port[i][DLB2_DIR_PORT].mmaped = true;
> +
> +               /* Now load balanced ports */
> +               dlb2_port[i][DLB2_LDB_PORT].pp_addr = NULL;
> +               dlb2_port[i][DLB2_LDB_PORT].cq_base = NULL;
> +               dlb2_port[i][DLB2_LDB_PORT].mmaped = true;
> +       }
> +}
> +
> +static int
> +dlb2_pf_open(struct dlb2_hw_dev *handle, const char *name)
> +{
> +       RTE_SET_USED(handle);
> +       RTE_SET_USED(name);
> +
> +       return 0;
> +}
> +
> +static int
> +dlb2_pf_get_device_version(struct dlb2_hw_dev *handle,
> +                          uint8_t *revision)
> +{
> +       struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev;
> +
> +       *revision = dlb2_dev->revision;
> +
> +       return 0;
> +}
> +
> +static void
> +dlb2_pf_hardware_init(struct dlb2_hw_dev *handle)
> +{
> +       struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev;
> +
> +       dlb2_hw_enable_sparse_ldb_cq_mode(&dlb2_dev->hw);
> +       dlb2_hw_enable_sparse_dir_cq_mode(&dlb2_dev->hw);
> +}
> +
> +static int
> +dlb2_pf_get_num_resources(struct dlb2_hw_dev *handle,
> +                         struct dlb2_get_num_resources_args *rsrcs)
> +{
> +       struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev;
> +
> +       return dlb2_hw_get_num_resources(&dlb2_dev->hw, rsrcs, false, 0);
> +}
> +
> +static int
> +dlb2_pf_get_cq_poll_mode(struct dlb2_hw_dev *handle,
> +                        enum dlb2_cq_poll_modes *mode)
> +{
> +       RTE_SET_USED(handle);
> +
> +       *mode = DLB2_CQ_POLL_MODE_SPARSE;
> +
> +       return 0;
> +}
> +
> +static void
> +dlb2_pf_iface_fn_ptrs_init(void)
> +{
> +       dlb2_iface_low_level_io_init = dlb2_pf_low_level_io_init;
> +       dlb2_iface_open = dlb2_pf_open;
> +       dlb2_iface_get_device_version = dlb2_pf_get_device_version;
> +       dlb2_iface_hardware_init = dlb2_pf_hardware_init;
> +       dlb2_iface_get_num_resources = dlb2_pf_get_num_resources;
> +       dlb2_iface_get_cq_poll_mode = dlb2_pf_get_cq_poll_mode;
> +}
> +
> +/* PCI DEV HOOKS */
> +static int
> +dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
> +{
> +       int ret = 0;
> +       struct rte_pci_device *pci_dev;
> +       struct dlb2_devargs dlb2_args = {
> +               .socket_id = rte_socket_id(),
> +               .max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
> +               .num_dir_credits_override = -1,
> +               .qid_depth_thresholds = { {0} },
> +               .cos_id = DLB2_COS_DEFAULT
> +       };
> +       struct dlb2_eventdev *dlb2;
> +
> +       DLB2_LOG_DBG("Enter with dev_id=%d socket_id=%d",
> +                    eventdev->data->dev_id, eventdev->data->socket_id);
> +
> +       dlb2_pf_iface_fn_ptrs_init();
> +
> +       pci_dev = RTE_DEV_TO_PCI(eventdev->dev);
> +
> +       if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +               dlb2 = dlb2_pmd_priv(eventdev); /* rte_zmalloc_socket mem */
> +
> +               /* Probe the DLB2 PF layer */
> +               dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev);
> +
> +               if (dlb2->qm_instance.pf_dev == NULL) {
> +                       DLB2_LOG_ERR("DLB2 PF Probe failed with error %d\n",
> +                                    rte_errno);
> +                       ret = -rte_errno;
> +                       goto dlb2_probe_failed;
> +               }
> +
> +               /* Were we invoked with runtime parameters? */
> +               if (pci_dev->device.devargs) {
> +                       ret = dlb2_parse_params(pci_dev->device.devargs->args,
> +                                               pci_dev->device.devargs->name,
> +                                               &dlb2_args);
> +                       if (ret) {
> +                               DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
> +                                            ret, rte_errno);
> +                               goto dlb2_probe_failed;
> +                       }
> +               }
> +
> +               ret = dlb2_primary_eventdev_probe(eventdev,
> +                                                 event_dlb2_pf_name,
> +                                                 &dlb2_args);
> +       } else {
> +               ret = dlb2_secondary_eventdev_probe(eventdev,
> +                                                   event_dlb2_pf_name);
> +       }
> +       if (ret)
> +               goto dlb2_probe_failed;
> +
> +       DLB2_LOG_INFO("DLB2 PF Probe success\n");
> +
> +       return 0;
> +
> +dlb2_probe_failed:
> +
> +       DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d\n", ret);
> +
> +       return ret;
> +}
> +
> +#define EVENTDEV_INTEL_VENDOR_ID 0x8086
> +
> +static const struct rte_pci_id pci_id_dlb2_map[] = {
> +       {
> +               RTE_PCI_DEVICE(EVENTDEV_INTEL_VENDOR_ID,
> +                              PCI_DEVICE_ID_INTEL_DLB2_PF)
> +       },
> +       {
> +               .vendor_id = 0,
> +       },
> +};
> +
> +static int
> +event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
> +                    struct rte_pci_device *pci_dev)
> +{
> +       int ret;
> +
> +       ret = rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
> +                                            sizeof(struct dlb2_eventdev),
> +                                            dlb2_eventdev_pci_init,
> +                                            event_dlb2_pf_name);
> +       if (ret) {
> +               DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, ret=%d\n",
> +                             ret);
> +       }
> +
> +       return ret;
> +}
> +
> +static int
> +event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +       int ret;
> +
> +       ret = rte_event_pmd_pci_remove(pci_dev, NULL);
> +
> +       if (ret) {
> +               DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, ret=%d\n",
> +                             ret);
> +       }
> +
> +       return ret;
> +}
> +
> +static struct rte_pci_driver pci_eventdev_dlb2_pmd = {
> +       .id_table = pci_id_dlb2_map,
> +       .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> +       .probe = event_dlb2_pci_probe,
> +       .remove = event_dlb2_pci_remove,
> +};
> +
> +RTE_PMD_REGISTER_PCI(event_dlb2_pf, pci_eventdev_dlb2_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(event_dlb2_pf, pci_id_dlb2_map);
> --
> 2.6.4
>
Timothy McDaniel Oct. 20, 2020, 2:04 p.m. UTC | #2
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Sunday, October 18, 2020 3:40 AM
> To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Cc: Burakov, Anatoly <anatoly.burakov@intel.com>; dpdk-dev <dev@dpdk.org>;
> Carrillo, Erik G <Erik.G.Carrillo@intel.com>; Eads, Gage
> <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> Jerin Jacob <jerinj@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v2 06/22] event/dlb2: add probe
> 
> On Sat, Oct 17, 2020 at 11:52 PM Timothy McDaniel
> <timothy.mcdaniel@intel.com> wrote:
> >
> > The DLB2 hardware is a PCI device. This commit adds
> > support for probe and other initialization. The
> > dlb2_iface.[ch] files implement a flexible interface
> > that supports both the PF PMD and the bifurcated PMD.
> > The bifurcated PMD will be released in a future
> > patch set. Note that the flexible interface is only
> > used for configuration, and is not used in the data
> > path. The shared code is added in pf/base.
> > This PMD supports command line parameters, and those
> > are parsed at probe-time.
> >
> > Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> 
> 
> 
> There is a build issue clang10.
> 
> [for-main]dell[dpdk-next-eventdev] $ clang -v
> clang version 10.0.1
> Target: x86_64-pc-linux-gnu
> Thread model: posix
> InstalledDir: /usr/bin
> Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-pc-linux-
> gnu/10.2.0
> Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-pc-linux-gnu/8.4.0
> Found candidate GCC installation:
> /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/10.2.0
> Found candidate GCC installation:
> /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/8.4.0
> Found candidate GCC installation: /usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0
> Found candidate GCC installation: /usr/lib/gcc/x86_64-pc-linux-gnu/8.4.0
> Found candidate GCC installation: /usr/lib64/gcc/x86_64-pc-linux-gnu/10.2.0
> Found candidate GCC installation: /usr/lib64/gcc/x86_64-pc-linux-gnu/8.4.0
> Selected GCC installation: /usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/10.2.0
> Candidate multilib: .;@m64
> Candidate multilib: 32;@m32
> Selected multilib: .;@m64
> 
> meson  -Dexamples=l3fwd --buildtype=debugoptimized --werror
> --default-library=static /export/dpdk-next-eventdev/devtools/..
> ./build-clang-static
> 
> 
> 
> ccache clang -Idrivers/libtmp_rte_pmd_dlb2_event.a.p -Idrivers
> -I../drivers -Idrivers/event/dlb2 -I../drivers/event/dlb2
> -Ilib/librte_eventdev -I../lib/librte_eventdev -I. -I.. -Iconfig
> -I../config -Ilib/librte_eal/include -I../lib/librte_e
> al/include -Ilib/librte_eal/linux/include
> -I../lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> -I../lib/librte_eal/x86/include -Ilib/librte_eal/common
> -I../lib/librte_eal/common -Ilib/librte_eal -I../lib/librte_eal
> -Ilib/librte_kv
> args -I../lib/librte_kvargs -Ilib/librte_metrics
> -I../lib/librte_metrics -Ilib/librte_telemetry
> -I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
> -Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
> -I../lib/librte_net
>  -Ilib/librte_mbuf -I../lib/librte_mbuf -Ilib/librte_mempool
> -I../lib/librte_mempool -Ilib/librte_meter -I../lib/librte_meter
> -Ilib/librte_hash -I../lib/librte_hash -Ilib/librte_timer
> -I../lib/librte_timer -Ilib/librte_cryptodev -I../lib/li
> brte_cryptodev -Ilib/librte_pci -I../lib/librte_pci -Idrivers/bus/pci
> -I../drivers/bus/pci -I../drivers/bus/pci/linux -Xclang
> -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> -Werror -O2 -g -include rte_config.h -Wextra
> -Wcast-qual -Wdeprecated -Wformat-nonliteral -Wformat-security
> -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
> -Wold-style-definition -Wpointer-arith -Wsign-compare
> -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-address-of-pa
> cked-member -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC
> -march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -MD -
> MQ
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_pf_dlb2_pf.c.o -MF
> drivers/libtmp_rte_pmd_dlb2_event.a.p/ev
> ent_dlb2_pf_dlb2_pf.c.o.d -o
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_pf_dlb2_pf.c.o -c
> ../drivers/event/dlb2/pf/dlb2_pf.c
> In file included from ../drivers/event/dlb2/pf/dlb2_pf.c:36:
> ../drivers/event/dlb2/pf/../dlb2_inline_fns.h:45:2: error: use of unknown builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
>         __builtin_ia32_movntdq((__v2di *)pp_addr + 0, (__v2di)src_data0);
>         ^
> ../drivers/event/dlb2/pf/../dlb2_inline_fns.h:45:2: note: did you mean '__builtin_ia32_movntq'?
> /usr/lib/clang/10.0.1/include/xmmintrin.h:2122:3: note: '__builtin_ia32_movntq' declared here
>   __builtin_ia32_movntq(__p, __a);
>   ^
> In file included from ../drivers/event/dlb2/pf/dlb2_pf.c:36:
> ../drivers/event/dlb2/pf/../dlb2_inline_fns.h:61:2: error: use of unknown builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
>         __builtin_ia32_movntdq((__v2di *)pp_addr, (__v2di)src_data0);
>         ^
> 2 errors generated.
> [2124/2479] Compiling C object
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o
> FAILED: drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o
> ccache clang -Idrivers/libtmp_rte_pmd_dlb2_event.a.p -Idrivers
> -I../drivers -Idrivers/event/dlb2 -I../drivers/event/dlb2
> -Ilib/librte_eventdev -I../lib/librte_eventdev -I. -I.. -Iconfig
> -I../config -Ilib/librte_eal/include -I../lib/librte_e
> al/include -Ilib/librte_eal/linux/include
> -I../lib/librte_eal/linux/include -Ilib/librte_eal/x86/include
> -I../lib/librte_eal/x86/include -Ilib/librte_eal/common
> -I../lib/librte_eal/common -Ilib/librte_eal -I../lib/librte_eal
> -Ilib/librte_kv
> args -I../lib/librte_kvargs -Ilib/librte_metrics
> -I../lib/librte_metrics -Ilib/librte_telemetry
> -I../lib/librte_telemetry -Ilib/librte_ring -I../lib/librte_ring
> -Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net
> -I../lib/librte_net
>  -Ilib/librte_mbuf -I../lib/librte_mbuf -Ilib/librte_mempool
> -I../lib/librte_mempool -Ilib/librte_meter -I../lib/librte_meter
> -Ilib/librte_hash -I../lib/librte_hash -Ilib/librte_timer
> -I../lib/librte_timer -Ilib/librte_cryptodev -I../lib/li
> brte_cryptodev -Ilib/librte_pci -I../lib/librte_pci -Idrivers/bus/pci
> -I../drivers/bus/pci -I../drivers/bus/pci/linux -Xclang
> -fcolor-diagnostics -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch
> -Werror -O2 -g -include rte_config.h -Wextra
> -Wcast-qual -Wdeprecated -Wformat-nonliteral -Wformat-security
> -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
> -Wold-style-definition -Wpointer-arith -Wsign-compare
> -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-address-of-pa
> cked-member -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC
> -march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -MD -
> MQ
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o -MF
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dl
> b2_dlb2.c.o.d -o
> drivers/libtmp_rte_pmd_dlb2_event.a.p/event_dlb2_dlb2.c.o -c
> ../drivers/event/dlb2/dlb2.c
> In file included from ../drivers/event/dlb2/dlb2.c:35:
> ../drivers/event/dlb2/dlb2_inline_fns.h:45:2: error: use of unknown builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
>         __builtin_ia32_movntdq((__v2di *)pp_addr + 0, (__v2di)src_data0);
>         ^
> ../drivers/event/dlb2/dlb2_inline_fns.h:45:2: note: did you mean '__builtin_ia32_movntq'?
> /usr/lib/clang/10.0.1/include/xmmintrin.h:2122:3: note: '__builtin_ia32_movntq' declared here
>   __builtin_ia32_movntq(__p, __a);
>   ^
> In file included from ../drivers/event/dlb2/dlb2.c:35:
> ../drivers/event/dlb2/dlb2_inline_fns.h:61:2: error: use of unknown builtin '__builtin_ia32_movntdq' [-Wimplicit-function-declaration]
>         __builtin_ia32_movntdq((__v2di *)pp_addr, (__v2di)src_data0);
>         ^
> 2 errors generated.
> [2125/2479] Generating rte_common_sfc_efx.sym_chk with a meson_exe.py
> custom command
> ninja: build stopped: subcommand failed.
> 
> 
> 
> 
> > ---
> >  drivers/event/dlb2/dlb2.c                      |  532 +++++
> >  drivers/event/dlb2/dlb2_iface.c                |   28 +
> >  drivers/event/dlb2/dlb2_iface.h                |   29 +
> >  drivers/event/dlb2/meson.build                 |    6 +-
> >  drivers/event/dlb2/pf/base/dlb2_hw_types.h     |  367 ++++
> >  drivers/event/dlb2/pf/base/dlb2_mbox.h         |  596 ++++++
> >  drivers/event/dlb2/pf/base/dlb2_osdep.h        |  247 +++
> >  drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h |  440 +++++
> >  drivers/event/dlb2/pf/base/dlb2_osdep_list.h   |  131 ++
> >  drivers/event/dlb2/pf/base/dlb2_osdep_types.h  |   31 +
> >  drivers/event/dlb2/pf/base/dlb2_regs.h         | 2527 ++++++++++++++++++++++++
> >  drivers/event/dlb2/pf/base/dlb2_resource.c     |  274 +++
> >  drivers/event/dlb2/pf/base/dlb2_resource.h     | 1913 ++++++++++++++++++
> >  drivers/event/dlb2/pf/dlb2_main.c              |  615 ++++++
> >  drivers/event/dlb2/pf/dlb2_main.h              |  106 +
> >  drivers/event/dlb2/pf/dlb2_pf.c                |  244 +++
> >  16 files changed, 8085 insertions(+), 1 deletion(-)
> >  create mode 100644 drivers/event/dlb2/dlb2.c
> >  create mode 100644 drivers/event/dlb2/dlb2_iface.c
> >  create mode 100644 drivers/event/dlb2/dlb2_iface.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_hw_types.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_mbox.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_list.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_osdep_types.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_regs.h
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c
> >  create mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.h
> >  create mode 100644 drivers/event/dlb2/pf/dlb2_main.c
> >  create mode 100644 drivers/event/dlb2/pf/dlb2_main.h
> >  create mode 100644 drivers/event/dlb2/pf/dlb2_pf.c
> >
> > diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> > new file mode 100644
> > index 0000000..26985b9
> > --- /dev/null
> > +++ b/drivers/event/dlb2/dlb2.c
> > @@ -0,0 +1,532 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#include <assert.h>
> > +#include <errno.h>
> > +#include <nmmintrin.h>
> > +#include <pthread.h>
> > +#include <stdint.h>
> > +#include <stdbool.h>
> > +#include <stdio.h>
> > +#include <string.h>
> > +#include <sys/mman.h>
> > +#include <sys/fcntl.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_config.h>
> > +#include <rte_cycles.h>
> > +#include <rte_debug.h>
> > +#include <rte_dev.h>
> > +#include <rte_errno.h>
> > +#include <rte_eventdev.h>
> > +#include <rte_eventdev_pmd.h>
> > +#include <rte_io.h>
> > +#include <rte_kvargs.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_prefetch.h>
> > +#include <rte_ring.h>
> > +#include <rte_string_fns.h>
> > +
> > +#include "dlb2_priv.h"
> > +#include "dlb2_iface.h"
> > +#include "dlb2_inline_fns.h"
> > +
> > +#if !defined RTE_ARCH_X86_64
> > +#error "This implementation only supports RTE_ARCH_X86_64 architecture."
> > +#endif
> > +
> > +/*
> > + * Resources exposed to eventdev. Some values overridden at runtime using
> > + * values returned by the DLB kernel driver.
> > + */
> > +#if (RTE_EVENT_MAX_QUEUES_PER_DEV > UINT8_MAX)
> > +#error "RTE_EVENT_MAX_QUEUES_PER_DEV cannot fit in member max_event_queues"
> > +#endif
> > +static struct rte_event_dev_info evdev_dlb2_default_info = {
> > +       .driver_name = "", /* probe will set */
> > +       .min_dequeue_timeout_ns = DLB2_MIN_DEQUEUE_TIMEOUT_NS,
> > +       .max_dequeue_timeout_ns = DLB2_MAX_DEQUEUE_TIMEOUT_NS,
> > +#if (RTE_EVENT_MAX_QUEUES_PER_DEV < DLB2_MAX_NUM_LDB_QUEUES)
> > +       .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
> > +#else
> > +       .max_event_queues = DLB2_MAX_NUM_LDB_QUEUES,
> > +#endif
> > +       .max_event_queue_flows = DLB2_MAX_NUM_FLOWS,
> > +       .max_event_queue_priority_levels = DLB2_QID_PRIORITIES,
> > +       .max_event_priority_levels = DLB2_QID_PRIORITIES,
> > +       .max_event_ports = DLB2_MAX_NUM_LDB_PORTS,
> > +       .max_event_port_dequeue_depth = DLB2_MAX_CQ_DEPTH,
> > +       .max_event_port_enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH,
> > +       .max_event_port_links = DLB2_MAX_NUM_QIDS_PER_LDB_CQ,
> > +       .max_num_events = DLB2_MAX_NUM_LDB_CREDITS,
> > +       .max_single_link_event_port_queue_pairs = DLB2_MAX_NUM_DIR_PORTS,
> > +       .event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
> > +                         RTE_EVENT_DEV_CAP_EVENT_QOS |
> > +                         RTE_EVENT_DEV_CAP_BURST_MODE |
> > +                         RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
> > +                         RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE |
> > +                         RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES),
> > +};
> > +
> > +struct process_local_port_data
> > +dlb2_port[DLB2_MAX_NUM_PORTS][DLB2_NUM_PORT_TYPES];
> > +
> > +/* override defaults with value(s) provided on command line */
> > +static void
> > +dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
> > +                                int *qid_depth_thresholds)
> > +{
> > +       int q;
> > +
> > +       for (q = 0; q < DLB2_MAX_NUM_QUEUES; q++) {
> > +               if (qid_depth_thresholds[q] != 0)
> > +                       dlb2->ev_queues[q].depth_threshold =
> > +                               qid_depth_thresholds[q];
> > +       }
> > +}
> > +
> > +static int
> > +dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
> > +{
> > +       struct dlb2_hw_dev *handle = &dlb2->qm_instance;
> > +       struct dlb2_hw_resource_info *dlb2_info = &handle->info;
> > +       int ret;
> > +
> > +       /* Query driver resources provisioned for this device */
> > +
> > +       ret = dlb2_iface_get_num_resources(handle,
> > +                                          &dlb2->hw_rsrc_query_results);
> > +       if (ret) {
> > +               DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d\n", ret);
> > +               return ret;
> > +       }
> > +
> > +       /* Complete filling in device resource info returned to evdev app,
> > +        * overriding any default values.
> > +        * The capabilities (CAPs) were set at compile time.
> > +        */
> > +
> > +       evdev_dlb2_default_info.max_event_queues =
> > +               dlb2->hw_rsrc_query_results.num_ldb_queues;
> > +
> > +       evdev_dlb2_default_info.max_event_ports =
> > +               dlb2->hw_rsrc_query_results.num_ldb_ports;
> > +
> > +       evdev_dlb2_default_info.max_num_events =
> > +               dlb2->hw_rsrc_query_results.num_ldb_credits;
> > +
> > +       /* Save off values used when creating the scheduling domain. */
> > +
> > +       handle->info.num_sched_domains =
> > +               dlb2->hw_rsrc_query_results.num_sched_domains;
> > +
> > +       handle->info.hw_rsrc_max.nb_events_limit =
> > +               dlb2->hw_rsrc_query_results.num_ldb_credits;
> > +
> > +       handle->info.hw_rsrc_max.num_queues =
> > +               dlb2->hw_rsrc_query_results.num_ldb_queues +
> > +               dlb2->hw_rsrc_query_results.num_dir_ports;
> > +
> > +       handle->info.hw_rsrc_max.num_ldb_queues =
> > +               dlb2->hw_rsrc_query_results.num_ldb_queues;
> > +
> > +       handle->info.hw_rsrc_max.num_ldb_ports =
> > +               dlb2->hw_rsrc_query_results.num_ldb_ports;
> > +
> > +       handle->info.hw_rsrc_max.num_dir_ports =
> > +               dlb2->hw_rsrc_query_results.num_dir_ports;
> > +
> > +       handle->info.hw_rsrc_max.reorder_window_size =
> > +               dlb2->hw_rsrc_query_results.num_hist_list_entries;
> > +
> > +       rte_memcpy(dlb2_info, &handle->info.hw_rsrc_max, sizeof(*dlb2_info));
> > +
> > +       return 0;
> > +}
> > +
> > +#define RTE_BASE_10 10
> > +
> > +static int
> > +dlb2_string_to_int(int *result, const char *str)
> > +{
> > +       long ret;
> > +       char *endptr;
> > +
> > +       if (str == NULL || result == NULL)
> > +               return -EINVAL;
> > +
> > +       errno = 0;
> > +       ret = strtol(str, &endptr, RTE_BASE_10);
> > +       if (errno)
> > +               return -errno;
> > +
> > +       /* long int and int may be different width for some architectures */
> > +       if (ret < INT_MIN || ret > INT_MAX || endptr == str)
> > +               return -EINVAL;
> > +
> > +       *result = ret;
> > +       return 0;
> > +}
> > +
> > +static int
> > +set_numa_node(const char *key __rte_unused, const char *value, void *opaque)
> > +{
> > +       int *socket_id = opaque;
> > +       int ret;
> > +
> > +       ret = dlb2_string_to_int(socket_id, value);
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       if (*socket_id > RTE_MAX_NUMA_NODES)
> > +               return -EINVAL;
> > +       return 0;
> > +}
> > +
> > +static int
> > +set_max_num_events(const char *key __rte_unused,
> > +                  const char *value,
> > +                  void *opaque)
> > +{
> > +       int *max_num_events = opaque;
> > +       int ret;
> > +
> > +       if (value == NULL || opaque == NULL) {
> > +               DLB2_LOG_ERR("NULL pointer\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = dlb2_string_to_int(max_num_events, value);
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       if (*max_num_events < 0 || *max_num_events >
> > +                       DLB2_MAX_NUM_LDB_CREDITS) {
> > +               DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d\n",
> > +                            DLB2_MAX_NUM_LDB_CREDITS);
> > +               return -EINVAL;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int
> > +set_num_dir_credits(const char *key __rte_unused,
> > +                   const char *value,
> > +                   void *opaque)
> > +{
> > +       int *num_dir_credits = opaque;
> > +       int ret;
> > +
> > +       if (value == NULL || opaque == NULL) {
> > +               DLB2_LOG_ERR("NULL pointer\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = dlb2_string_to_int(num_dir_credits, value);
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       if (*num_dir_credits < 0 ||
> > +           *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS) {
> > +               DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
> > +                            DLB2_MAX_NUM_DIR_CREDITS);
> > +               return -EINVAL;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int
> > +set_dev_id(const char *key __rte_unused,
> > +          const char *value,
> > +          void *opaque)
> > +{
> > +       int *dev_id = opaque;
> > +       int ret;
> > +
> > +       if (value == NULL || opaque == NULL) {
> > +               DLB2_LOG_ERR("NULL pointer\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = dlb2_string_to_int(dev_id, value);
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       return 0;
> > +}
> > +
> > +static int
> > +set_cos(const char *key __rte_unused,
> > +       const char *value,
> > +       void *opaque)
> > +{
> > +       enum dlb2_cos *cos_id = opaque;
> > +       int x = 0;
> > +       int ret;
> > +
> > +       if (value == NULL || opaque == NULL) {
> > +               DLB2_LOG_ERR("NULL pointer\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = dlb2_string_to_int(&x, value);
> > +       if (ret < 0)
> > +               return ret;
> > +
> > +       if (x != DLB2_COS_DEFAULT && (x < DLB2_COS_0 || x > DLB2_COS_3)) {
> > +               DLB2_LOG_ERR(
> > +                       "COS %d out of range, must be DLB2_COS_DEFAULT or 0-3\n",
> > +                       x);
> > +               return -EINVAL;
> > +       }
> > +
> > +       *cos_id = x;
> > +
> > +       return 0;
> > +}
> > +
> > +
> > +static int
> > +set_qid_depth_thresh(const char *key __rte_unused,
> > +                    const char *value,
> > +                    void *opaque)
> > +{
> > +       struct dlb2_qid_depth_thresholds *qid_thresh = opaque;
> > +       int first, last, thresh, i;
> > +
> > +       if (value == NULL || opaque == NULL) {
> > +               DLB2_LOG_ERR("NULL pointer\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       /* command line override may take one of the following 3 forms:
> > +        * qid_depth_thresh=all:<threshold_value> ... all queues
> > +        * qid_depth_thresh=qidA-qidB:<threshold_value> ... a range of queues
> > +        * qid_depth_thresh=qid:<threshold_value> ... just one queue
> > +        */
> > +       if (sscanf(value, "all:%d", &thresh) == 1) {
> > +               first = 0;
> > +               last = DLB2_MAX_NUM_QUEUES - 1;
> > +       } else if (sscanf(value, "%d-%d:%d", &first, &last, &thresh) == 3) {
> > +               /* we have everything we need */
> > +       } else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
> > +               last = first;
> > +       } else {
> > +               DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       if (first > last || first < 0 || last >= DLB2_MAX_NUM_QUEUES) {
> > +               DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
> > +               DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
> > +                            DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
> > +               return -EINVAL;
> > +       }
> > +
> > +       for (i = first; i <= last; i++)
> > +               qid_thresh->val[i] = thresh; /* indexed by qid */
> > +
> > +       return 0;
> > +}
> > +
> > +static void
> > +dlb2_entry_points_init(struct rte_eventdev *dev)
> > +{
> > +       RTE_SET_USED(dev);
> > +
> > +       /* Eventdev PMD entry points */
> > +}
> > +
> > +int
> > +dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
> > +                           const char *name,
> > +                           struct dlb2_devargs *dlb2_args)
> > +{
> > +       struct dlb2_eventdev *dlb2;
> > +       int err;
> > +
> > +       dlb2 = dev->data->dev_private;
> > +
> > +       dlb2->event_dev = dev; /* backlink */
> > +
> > +       evdev_dlb2_default_info.driver_name = name;
> > +
> > +       dlb2->max_num_events_override = dlb2_args->max_num_events;
> > +       dlb2->num_dir_credits_override = dlb2_args->num_dir_credits_override;
> > +       dlb2->qm_instance.cos_id = dlb2_args->cos_id;
> > +
> > +       err = dlb2_iface_open(&dlb2->qm_instance, name);
> > +       if (err < 0) {
> > +               DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
> > +                            err);
> > +               return err;
> > +       }
> > +
> > +       err = dlb2_iface_get_device_version(&dlb2->qm_instance,
> > +                                           &dlb2->revision);
> > +       if (err < 0) {
> > +               DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d\n",
> > +                            err);
> > +               return err;
> > +       }
> > +
> > +       err = dlb2_hw_query_resources(dlb2);
> > +       if (err) {
> > +               DLB2_LOG_ERR("get resources err=%d for %s\n",
> > +                            err, name);
> > +               return err;
> > +       }
> > +
> > +       dlb2_iface_hardware_init(&dlb2->qm_instance);
> > +
> > +       err = dlb2_iface_get_cq_poll_mode(&dlb2->qm_instance, &dlb2->poll_mode);
> > +       if (err < 0) {
> > +               DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d\n",
> > +                            err);
> > +               return err;
> > +       }
> > +
> > +       rte_spinlock_init(&dlb2->qm_instance.resource_lock);
> > +
> > +       dlb2_iface_low_level_io_init();
> > +
> > +       dlb2_entry_points_init(dev);
> > +
> > +       dlb2_init_queue_depth_thresholds(dlb2,
> > +                                        dlb2_args->qid_depth_thresholds.val);
> > +
> > +       return 0;
> > +}
> > +
> > +int
> > +dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
> > +                             const char *name)
> > +{
> > +       struct dlb2_eventdev *dlb2;
> > +       int err;
> > +
> > +       dlb2 = dev->data->dev_private;
> > +
> > +       evdev_dlb2_default_info.driver_name = name;
> > +
> > +       err = dlb2_iface_open(&dlb2->qm_instance, name);
> > +       if (err < 0) {
> > +               DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
> > +                            err);
> > +               return err;
> > +       }
> > +
> > +       err = dlb2_hw_query_resources(dlb2);
> > +       if (err) {
> > +               DLB2_LOG_ERR("get resources err=%d for %s\n",
> > +                            err, name);
> > +               return err;
> > +       }
> > +
> > +       dlb2_iface_low_level_io_init();
> > +
> > +       dlb2_entry_points_init(dev);
> > +
> > +       return 0;
> > +}
> > +
> > +int
> > +dlb2_parse_params(const char *params,
> > +                 const char *name,
> > +                 struct dlb2_devargs *dlb2_args)
> > +{
> > +       int ret = 0;
> > +       static const char * const args[] = { NUMA_NODE_ARG,
> > +                                            DLB2_MAX_NUM_EVENTS,
> > +                                            DLB2_NUM_DIR_CREDITS,
> > +                                            DEV_ID_ARG,
> > +                                            DLB2_QID_DEPTH_THRESH_ARG,
> > +                                            DLB2_COS_ARG,
> > +                                            NULL };
> > +
> > +       if (params != NULL && params[0] != '\0') {
> > +               struct rte_kvargs *kvlist = rte_kvargs_parse(params, args);
> > +
> > +               if (kvlist == NULL) {
> > +                       RTE_LOG(INFO, PMD,
> > +                               "Ignoring unsupported parameters when creating device '%s'\n",
> > +                               name);
> > +               } else {
> > +                       int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
> > +                                                    set_numa_node,
> > +                                                    &dlb2_args->socket_id);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing numa node parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       ret = rte_kvargs_process(kvlist, DLB2_MAX_NUM_EVENTS,
> > +                                                set_max_num_events,
> > +                                                &dlb2_args->max_num_events);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing max_num_events parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       ret = rte_kvargs_process(kvlist,
> > +                                       DLB2_NUM_DIR_CREDITS,
> > +                                       set_num_dir_credits,
> > +                                       &dlb2_args->num_dir_credits_override);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing num_dir_credits parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       ret = rte_kvargs_process(kvlist, DEV_ID_ARG,
> > +                                                set_dev_id,
> > +                                                &dlb2_args->dev_id);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing dev_id parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       ret = rte_kvargs_process(
> > +                                       kvlist,
> > +                                       DLB2_QID_DEPTH_THRESH_ARG,
> > +                                       set_qid_depth_thresh,
> > +                                       &dlb2_args->qid_depth_thresholds);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing qid_depth_thresh parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       ret = rte_kvargs_process(kvlist, DLB2_COS_ARG,
> > +                                                set_cos,
> > +                                                &dlb2_args->cos_id);
> > +                       if (ret != 0) {
> > +                               DLB2_LOG_ERR("%s: Error parsing cos parameter",
> > +                                            name);
> > +                               rte_kvargs_free(kvlist);
> > +                               return ret;
> > +                       }
> > +
> > +                       rte_kvargs_free(kvlist);
> > +               }
> > +       }
> > +       return ret;
> > +}
> > +RTE_LOG_REGISTER(eventdev_dlb2_log_level, pmd.event.dlb2, NOTICE);
> > diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb2/dlb2_iface.c
> > new file mode 100644
> > index 0000000..0d93faf
> > --- /dev/null
> > +++ b/drivers/event/dlb2/dlb2_iface.c
> > @@ -0,0 +1,28 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#include <stdint.h>
> > +
> > +#include "dlb2_priv.h"
> > +
> > +/* DLB2 PMD Internal interface function pointers.
> > + * If VDEV (bifurcated PMD),  these will resolve to functions that issue ioctls
> > + * serviced by DLB kernel module.
> > + * If PCI (PF PMD),  these will be implemented locally in user mode.
> > + */
> > +
> > +void (*dlb2_iface_low_level_io_init)(void);
> > +
> > +int (*dlb2_iface_open)(struct dlb2_hw_dev *handle, const char *name);
> > +
> > +int (*dlb2_iface_get_device_version)(struct dlb2_hw_dev *handle,
> > +                                    uint8_t *revision);
> > +
> > +void (*dlb2_iface_hardware_init)(struct dlb2_hw_dev *handle);
> > +
> > +int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,
> > +                                  enum dlb2_cq_poll_modes *mode);
> > +
> > +int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
> > +                                   struct dlb2_get_num_resources_args *rsrcs);
> > diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb2/dlb2_iface.h
> > new file mode 100644
> > index 0000000..4fb416e
> > --- /dev/null
> > +++ b/drivers/event/dlb2/dlb2_iface.h
> > @@ -0,0 +1,29 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef _DLB2_IFACE_H_
> > +#define _DLB2_IFACE_H_
> > +
> > +/* DLB2 PMD Internal interface function pointers.
> > + * If VDEV (bifurcated PMD),  these will resolve to functions that issue ioctls
> > + * serviced by DLB kernel module.
> > + * If PCI (PF PMD),  these will be implemented locally in user mode.
> > + */
> > +
> > +extern void (*dlb2_iface_low_level_io_init)(void);
> > +
> > +extern int (*dlb2_iface_open)(struct dlb2_hw_dev *handle, const char *name);
> > +
> > +extern int (*dlb2_iface_get_device_version)(struct dlb2_hw_dev *handle,
> > +                                           uint8_t *revision);
> > +
> > +extern void (*dlb2_iface_hardware_init)(struct dlb2_hw_dev *handle);
> > +
> > +extern int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,
> > +                                         enum dlb2_cq_poll_modes *mode);
> > +
> > +extern int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
> > +                               struct dlb2_get_num_resources_args *rsrcs);
> > +
> > +#endif /* _DLB2_IFACE_H_ */
> > diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
> > index 54ba2c8..99b71f9 100644
> > --- a/drivers/event/dlb2/meson.build
> > +++ b/drivers/event/dlb2/meson.build
> > @@ -1,7 +1,11 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2019-2020 Intel Corporation
> >
> > -sources = files(
> > +sources = files('dlb2.c',
> > +               'dlb2_iface.c',
> > +               'pf/dlb2_main.c',
> > +               'pf/dlb2_pf.c',
> > +               'pf/base/dlb2_resource.c'
> >  )
> >
> >  deps += ['mbuf', 'mempool', 'ring', 'pci', 'bus_pci']
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_hw_types.h b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
> > new file mode 100644
> > index 0000000..428a5e8
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_hw_types.h
> > @@ -0,0 +1,367 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_HW_TYPES_H
> > +#define __DLB2_HW_TYPES_H
> > +
> > +#include "dlb2_user.h"
> > +
> > +#include "dlb2_osdep_list.h"
> > +#include "dlb2_osdep_types.h"
> > +
> > +#define DLB2_MAX_NUM_VDEVS                     16
> > +#define DLB2_MAX_NUM_DOMAINS                   32
> > +#define DLB2_MAX_NUM_LDB_QUEUES                        32 /* LDB == load-balanced */
> > +#define DLB2_MAX_NUM_DIR_QUEUES                        64 /* DIR == directed */
> > +#define DLB2_MAX_NUM_LDB_PORTS                 64
> > +#define DLB2_MAX_NUM_DIR_PORTS                 64
> > +#define DLB2_MAX_NUM_LDB_CREDITS               (8 * 1024)
> > +#define DLB2_MAX_NUM_DIR_CREDITS               (2 * 1024)
> > +#define DLB2_MAX_NUM_HIST_LIST_ENTRIES         2048
> > +#define DLB2_MAX_NUM_AQED_ENTRIES              2048
> > +#define DLB2_MAX_NUM_QIDS_PER_LDB_CQ           8
> > +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS    2
> > +#define DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES     5
> > +#define DLB2_QID_PRIORITIES                    8
> > +#define DLB2_NUM_ARB_WEIGHTS                   8
> > +#define DLB2_MAX_WEIGHT                                255
> > +#define DLB2_NUM_COS_DOMAINS                   4
> > +#define DLB2_MAX_CQ_COMP_CHECK_LOOPS           409600
> > +#define DLB2_MAX_QID_EMPTY_CHECK_LOOPS         (32 * 64 * 1024 * (800 / 30))
> > +#ifdef FPGA
> > +#define DLB2_HZ                                        2000000
> > +#else
> > +#define DLB2_HZ                                        800000000
> > +#endif
> > +
> > +#define PCI_DEVICE_ID_INTEL_DLB2_PF 0x2710
> > +#define PCI_DEVICE_ID_INTEL_DLB2_VF 0x2711
> > +
> > +/* Interrupt related macros */
> > +#define DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS 1
> > +#define DLB2_PF_NUM_CQ_INTERRUPT_VECTORS     64
> > +#define DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS \
> > +       (DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
> > +        DLB2_PF_NUM_CQ_INTERRUPT_VECTORS)
> > +#define DLB2_PF_NUM_COMPRESSED_MODE_VECTORS \
> > +       (DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
> > +#define DLB2_PF_NUM_PACKED_MODE_VECTORS \
> > +       DLB2_PF_TOTAL_NUM_INTERRUPT_VECTORS
> > +#define DLB2_PF_COMPRESSED_MODE_CQ_VECTOR_ID \
> > +       DLB2_PF_NUM_NON_CQ_INTERRUPT_VECTORS
> > +
> > +/* DLB non-CQ interrupts (alarm, mailbox, WDT) */
> > +#define DLB2_INT_NON_CQ 0
> > +
> > +#define DLB2_ALARM_HW_SOURCE_SYS 0
> > +#define DLB2_ALARM_HW_SOURCE_DLB 1
> > +
> > +#define DLB2_ALARM_HW_UNIT_CHP 4
> > +
> > +#define DLB2_ALARM_SYS_AID_ILLEGAL_QID         3
> > +#define DLB2_ALARM_SYS_AID_DISABLED_QID                4
> > +#define DLB2_ALARM_SYS_AID_ILLEGAL_HCW         5
> > +#define DLB2_ALARM_HW_CHP_AID_ILLEGAL_ENQ      1
> > +#define DLB2_ALARM_HW_CHP_AID_EXCESS_TOKEN_POPS 2
> > +
> > +#define DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
> > +#define DLB2_VF_NUM_CQ_INTERRUPT_VECTORS     31
> > +#define DLB2_VF_BASE_CQ_VECTOR_ID           0
> > +#define DLB2_VF_LAST_CQ_VECTOR_ID           30
> > +#define DLB2_VF_MBOX_VECTOR_ID              31
> > +#define DLB2_VF_TOTAL_NUM_INTERRUPT_VECTORS \
> > +       (DLB2_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
> > +        DLB2_VF_NUM_CQ_INTERRUPT_VECTORS)
> > +
> > +#define DLB2_VDEV_MAX_NUM_INTERRUPT_VECTORS (DLB2_MAX_NUM_LDB_PORTS + \
> > +                                            DLB2_MAX_NUM_DIR_PORTS + 1)
> > +
> > +/*
> > + * Hardware-defined base addresses. Those prefixed 'DLB2_DRV' are only
> > + * used by the PF driver.
> > + */
> > +#define DLB2_DRV_LDB_PP_BASE   0x2300000
> > +#define DLB2_DRV_LDB_PP_STRIDE 0x1000
> > +#define DLB2_DRV_LDB_PP_BOUND  (DLB2_DRV_LDB_PP_BASE + \
> > +                               DLB2_DRV_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
> > +#define DLB2_DRV_DIR_PP_BASE   0x2200000
> > +#define DLB2_DRV_DIR_PP_STRIDE 0x1000
> > +#define DLB2_DRV_DIR_PP_BOUND  (DLB2_DRV_DIR_PP_BASE + \
> > +                               DLB2_DRV_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
> > +#define DLB2_LDB_PP_BASE       0x2100000
> > +#define DLB2_LDB_PP_STRIDE     0x1000
> > +#define DLB2_LDB_PP_BOUND      (DLB2_LDB_PP_BASE + \
> > +                               DLB2_LDB_PP_STRIDE * DLB2_MAX_NUM_LDB_PORTS)
> > +#define DLB2_LDB_PP_OFFS(id)   (DLB2_LDB_PP_BASE + (id) * DLB2_PP_SIZE)
> > +#define DLB2_DIR_PP_BASE       0x2000000
> > +#define DLB2_DIR_PP_STRIDE     0x1000
> > +#define DLB2_DIR_PP_BOUND      (DLB2_DIR_PP_BASE + \
> > +                               DLB2_DIR_PP_STRIDE * DLB2_MAX_NUM_DIR_PORTS)
> > +#define DLB2_DIR_PP_OFFS(id)   (DLB2_DIR_PP_BASE + (id) * DLB2_PP_SIZE)
> > +
> > +struct dlb2_resource_id {
> > +       u32 phys_id;
> > +       u32 virt_id;
> > +       u8 vdev_owned;
> > +       u8 vdev_id;
> > +};
> > +
> > +struct dlb2_freelist {
> > +       u32 base;
> > +       u32 bound;
> > +       u32 offset;
> > +};
> > +
> > +static inline u32 dlb2_freelist_count(struct dlb2_freelist *list)
> > +{
> > +       return list->bound - list->base - list->offset;
> > +}
> > +
> > +struct dlb2_hcw {
> > +       u64 data;
> > +       /* Word 3 */
> > +       u16 opaque;
> > +       u8 qid;
> > +       u8 sched_type:2;
> > +       u8 priority:3;
> > +       u8 msg_type:3;
> > +       /* Word 4 */
> > +       u16 lock_id;
> > +       u8 ts_flag:1;
> > +       u8 rsvd1:2;
> > +       u8 no_dec:1;
> > +       u8 cmp_id:4;
> > +       u8 cq_token:1;
> > +       u8 qe_comp:1;
> > +       u8 qe_frag:1;
> > +       u8 qe_valid:1;
> > +       u8 int_arm:1;
> > +       u8 error:1;
> > +       u8 rsvd:2;
> > +};
> > +
> > +struct dlb2_ldb_queue {
> > +       struct dlb2_list_entry domain_list;
> > +       struct dlb2_list_entry func_list;
> > +       struct dlb2_resource_id id;
> > +       struct dlb2_resource_id domain_id;
> > +       u32 num_qid_inflights;
> > +       u32 aqed_limit;
> > +       u32 sn_group; /* sn == sequence number */
> > +       u32 sn_slot;
> > +       u32 num_mappings;
> > +       u8 sn_cfg_valid;
> > +       u8 num_pending_additions;
> > +       u8 owned;
> > +       u8 configured;
> > +};
> > +
> > +/*
> > + * Directed ports and queues are paired by nature, so the driver tracks them
> > + * with a single data structure.
> > + */
> > +struct dlb2_dir_pq_pair {
> > +       struct dlb2_list_entry domain_list;
> > +       struct dlb2_list_entry func_list;
> > +       struct dlb2_resource_id id;
> > +       struct dlb2_resource_id domain_id;
> > +       u32 ref_cnt;
> > +       u8 init_tkn_cnt;
> > +       u8 queue_configured;
> > +       u8 port_configured;
> > +       u8 owned;
> > +       u8 enabled;
> > +};
> > +
> > +enum dlb2_qid_map_state {
> > +       /* The slot doesn't contain a valid queue mapping */
> > +       DLB2_QUEUE_UNMAPPED,
> > +       /* The slot contains a valid queue mapping */
> > +       DLB2_QUEUE_MAPPED,
> > +       /* The driver is mapping a queue into this slot */
> > +       DLB2_QUEUE_MAP_IN_PROG,
> > +       /* The driver is unmapping a queue from this slot */
> > +       DLB2_QUEUE_UNMAP_IN_PROG,
> > +       /*
> > +        * The driver is unmapping a queue from this slot, and once complete
> > +        * will replace it with another mapping.
> > +        */
> > +       DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP,
> > +};
> > +
> > +struct dlb2_ldb_port_qid_map {
> > +       enum dlb2_qid_map_state state;
> > +       u16 qid;
> > +       u16 pending_qid;
> > +       u8 priority;
> > +       u8 pending_priority;
> > +};
> > +
> > +struct dlb2_ldb_port {
> > +       struct dlb2_list_entry domain_list;
> > +       struct dlb2_list_entry func_list;
> > +       struct dlb2_resource_id id;
> > +       struct dlb2_resource_id domain_id;
> > +       /* The qid_map represents the hardware QID mapping state. */
> > +       struct dlb2_ldb_port_qid_map qid_map[DLB2_MAX_NUM_QIDS_PER_LDB_CQ];
> > +       u32 hist_list_entry_base;
> > +       u32 hist_list_entry_limit;
> > +       u32 ref_cnt;
> > +       u8 init_tkn_cnt;
> > +       u8 num_pending_removals;
> > +       u8 num_mappings;
> > +       u8 owned;
> > +       u8 enabled;
> > +       u8 configured;
> > +};
> > +
> > +struct dlb2_sn_group {
> > +       u32 mode;
> > +       u32 sequence_numbers_per_queue;
> > +       u32 slot_use_bitmap;
> > +       u32 id;
> > +};
> > +
> > +static inline bool dlb2_sn_group_full(struct dlb2_sn_group *group)
> > +{
> > +       u32 mask[] = {
> > +               0x0000ffff,  /* 64 SNs per queue */
> > +               0x000000ff,  /* 128 SNs per queue */
> > +               0x0000000f,  /* 256 SNs per queue */
> > +               0x00000003,  /* 512 SNs per queue */
> > +               0x00000001}; /* 1024 SNs per queue */
> > +
> > +       return group->slot_use_bitmap == mask[group->mode];
> > +}
> > +
> > +static inline int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
> > +{
> > +       u32 bound[] = {16, 8, 4, 2, 1};
> > +       u32 i;
> > +
> > +       for (i = 0; i < bound[group->mode]; i++) {
> > +               if (!(group->slot_use_bitmap & (1 << i))) {
> > +                       group->slot_use_bitmap |= 1 << i;
> > +                       return i;
> > +               }
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> > +static inline void
> > +dlb2_sn_group_free_slot(struct dlb2_sn_group *group, int slot)
> > +{
> > +       group->slot_use_bitmap &= ~(1 << slot);
> > +}
> > +
> > +static inline int dlb2_sn_group_used_slots(struct dlb2_sn_group *group)
> > +{
> > +       int i, cnt = 0;
> > +
> > +       for (i = 0; i < 32; i++)
> > +               cnt += !!(group->slot_use_bitmap & (1 << i));
> > +
> > +       return cnt;
> > +}
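
The sequence-number slot helpers above can be exercised in isolation. This is a reduced copy for illustration (only the fields the helpers touch), using a hypothetical mode value:

```c
#include <stdbool.h>

typedef unsigned int u32;

/* Reduced copy of the patch's structure: each sequence-number group tracks
 * slot usage in a bitmap, and mode selects the SNs-per-queue configuration
 * (0 => 64 SNs/queue with 16 slots ... 4 => 1024 SNs/queue with 1 slot). */
struct dlb2_sn_group {
	u32 mode;
	u32 slot_use_bitmap;
};

static int dlb2_sn_group_alloc_slot(struct dlb2_sn_group *group)
{
	u32 bound[] = {16, 8, 4, 2, 1};
	u32 i;

	/* Claim the first clear bit within this mode's slot count. */
	for (i = 0; i < bound[group->mode]; i++) {
		if (!(group->slot_use_bitmap & (1 << i))) {
			group->slot_use_bitmap |= 1 << i;
			return i;
		}
	}

	return -1; /* group full */
}

static bool dlb2_sn_group_full(struct dlb2_sn_group *group)
{
	u32 mask[] = {
		0x0000ffff,  /* 64 SNs per queue */
		0x000000ff,  /* 128 SNs per queue */
		0x0000000f,  /* 256 SNs per queue */
		0x00000003,  /* 512 SNs per queue */
		0x00000001}; /* 1024 SNs per queue */

	return group->slot_use_bitmap == mask[group->mode];
}
```

For mode 4 (1024 SNs per queue) there is a single slot, so a second allocation fails until the slot is freed.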
> > +
> > +struct dlb2_hw_domain {
> > +       struct dlb2_function_resources *parent_func;
> > +       struct dlb2_list_entry func_list;
> > +       struct dlb2_list_head used_ldb_queues;
> > +       struct dlb2_list_head used_ldb_ports[DLB2_NUM_COS_DOMAINS];
> > +       struct dlb2_list_head used_dir_pq_pairs;
> > +       struct dlb2_list_head avail_ldb_queues;
> > +       struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> > +       struct dlb2_list_head avail_dir_pq_pairs;
> > +       u32 total_hist_list_entries;
> > +       u32 avail_hist_list_entries;
> > +       u32 hist_list_entry_base;
> > +       u32 hist_list_entry_offset;
> > +       u32 num_ldb_credits;
> > +       u32 num_dir_credits;
> > +       u32 num_avail_aqed_entries;
> > +       u32 num_used_aqed_entries;
> > +       struct dlb2_resource_id id;
> > +       int num_pending_removals;
> > +       int num_pending_additions;
> > +       u8 configured;
> > +       u8 started;
> > +};
> > +
> > +struct dlb2_bitmap;
> > +
> > +struct dlb2_function_resources {
> > +       struct dlb2_list_head avail_domains;
> > +       struct dlb2_list_head used_domains;
> > +       struct dlb2_list_head avail_ldb_queues;
> > +       struct dlb2_list_head avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> > +       struct dlb2_list_head avail_dir_pq_pairs;
> > +       struct dlb2_bitmap *avail_hist_list_entries;
> > +       u32 num_avail_domains;
> > +       u32 num_avail_ldb_queues;
> > +       u32 num_avail_ldb_ports[DLB2_NUM_COS_DOMAINS];
> > +       u32 num_avail_dir_pq_pairs;
> > +       u32 num_avail_qed_entries;
> > +       u32 num_avail_dqed_entries;
> > +       u32 num_avail_aqed_entries;
> > +       u8 locked; /* (VDEV only) */
> > +};
> > +
> > +/*
> > + * After initialization, each resource in dlb2_hw_resources is located in one
> > + * of the following lists:
> > + * -- The PF's available resources list. These are unconfigured resources
> > + *     owned by the PF and not allocated to a dlb2 scheduling domain.
> > + * -- A VDEV's available resources list. These are VDEV-owned unconfigured
> > + *     resources not allocated to a dlb2 scheduling domain.
> > + * -- A domain's available resources list. These are domain-owned
> > + *     unconfigured resources.
> > + * -- A domain's used resources list. These are domain-owned configured
> > + *     resources.
> > + *
> > + * A resource moves to a new list when a VDEV or domain is created or
> > + * destroyed, or when the resource is configured.
> > + */
> > +struct dlb2_hw_resources {
> > +       struct dlb2_ldb_queue ldb_queues[DLB2_MAX_NUM_LDB_QUEUES];
> > +       struct dlb2_ldb_port ldb_ports[DLB2_MAX_NUM_LDB_PORTS];
> > +       struct dlb2_dir_pq_pair dir_pq_pairs[DLB2_MAX_NUM_DIR_PORTS];
> > +       struct dlb2_sn_group sn_groups[DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
> > +};
> > +
> > +struct dlb2_mbox {
> > +       u32 *mbox;
> > +       u32 *isr_in_progress;
> > +};
> > +
> > +struct dlb2_sw_mbox {
> > +       struct dlb2_mbox vdev_to_pf;
> > +       struct dlb2_mbox pf_to_vdev;
> > +       void (*pf_to_vdev_inject)(void *arg);
> > +       void *pf_to_vdev_inject_arg;
> > +};
> > +
> > +struct dlb2_hw {
> > +       /* BAR 0 address */
> > +       void  *csr_kva;
> > +       unsigned long csr_phys_addr;
> > +       /* BAR 2 address */
> > +       void  *func_kva;
> > +       unsigned long func_phys_addr;
> > +
> > +       /* Resource tracking */
> > +       struct dlb2_hw_resources rsrcs;
> > +       struct dlb2_function_resources pf;
> > +       struct dlb2_function_resources vdev[DLB2_MAX_NUM_VDEVS];
> > +       struct dlb2_hw_domain domains[DLB2_MAX_NUM_DOMAINS];
> > +       u8 cos_reservation[DLB2_NUM_COS_DOMAINS];
> > +
> > +       /* Virtualization */
> > +       int virt_mode;
> > +       struct dlb2_sw_mbox mbox[DLB2_MAX_NUM_VDEVS];
> > +       unsigned int pasid[DLB2_MAX_NUM_VDEVS];
> > +};
> > +
> > +#endif /* __DLB2_HW_TYPES_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_mbox.h b/drivers/event/dlb2/pf/base/dlb2_mbox.h
> > new file mode 100644
> > index 0000000..ce462c0
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_mbox.h
> > @@ -0,0 +1,596 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_BASE_DLB2_MBOX_H
> > +#define __DLB2_BASE_DLB2_MBOX_H
> > +
> > +#include "dlb2_osdep_types.h"
> > +#include "dlb2_regs.h"
> > +
> > +#define DLB2_MBOX_INTERFACE_VERSION 1
> > +
> > +/*
> > + * The PF uses its PF->VF mailbox to send responses to VF requests, as well as
> > + * to send requests of its own (e.g. notifying a VF of an impending FLR).
> > + * To avoid communication race conditions, e.g. the PF sends a response and
> > + * then sends a request before the VF reads the response, the PF->VF
> > + * mailbox is divided into two sections:
> > + * - Bytes 0-47: PF responses
> > + * - Bytes 48-63: PF requests
> > + *
> > + * Partitioning the PF->VF mailbox allows responses and requests to occupy
> > + * the mailbox simultaneously.
> > + */
> > +#define DLB2_PF2VF_RESP_BYTES    48
> > +#define DLB2_PF2VF_RESP_BASE     0
> > +#define DLB2_PF2VF_RESP_BASE_WORD (DLB2_PF2VF_RESP_BASE / 4)
> > +
> > +#define DLB2_PF2VF_REQ_BYTES     16
> > +#define DLB2_PF2VF_REQ_BASE      (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
> > +#define DLB2_PF2VF_REQ_BASE_WORD  (DLB2_PF2VF_REQ_BASE / 4)
> > +
> > +/*
> > + * Similarly, the VF->PF mailbox is divided into two sections:
> > + * - Bytes 0-239: VF requests
> > + * -- (Bytes 0-3 are unused due to a hardware erratum)
> > + * - Bytes 240-255: VF responses
> > + */
> > +#define DLB2_VF2PF_REQ_BYTES    236
> > +#define DLB2_VF2PF_REQ_BASE     4
> > +#define DLB2_VF2PF_REQ_BASE_WORD (DLB2_VF2PF_REQ_BASE / 4)
> > +
> > +#define DLB2_VF2PF_RESP_BYTES    16
> > +#define DLB2_VF2PF_RESP_BASE     (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
> > +#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
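
The offset arithmetic above can be sanity-checked: the PF->VF partitions sum to the 64-byte mailbox, and the VF->PF partitions (including the 4 unused bytes) to 256. Reproducing the constants:

```c
/* Mailbox partition constants, reproduced from the patch. The *_WORD forms
 * convert byte offsets into u32 indices for word-granular mailbox access. */
#define DLB2_PF2VF_RESP_BYTES    48
#define DLB2_PF2VF_RESP_BASE     0
#define DLB2_PF2VF_REQ_BYTES     16
#define DLB2_PF2VF_REQ_BASE      (DLB2_PF2VF_RESP_BASE + DLB2_PF2VF_RESP_BYTES)
#define DLB2_PF2VF_REQ_BASE_WORD (DLB2_PF2VF_REQ_BASE / 4)

#define DLB2_VF2PF_REQ_BYTES     236
#define DLB2_VF2PF_REQ_BASE      4 /* bytes 0-3 unused (hardware erratum) */
#define DLB2_VF2PF_RESP_BYTES    16
#define DLB2_VF2PF_RESP_BASE     (DLB2_VF2PF_REQ_BASE + DLB2_VF2PF_REQ_BYTES)
#define DLB2_VF2PF_RESP_BASE_WORD (DLB2_VF2PF_RESP_BASE / 4)
```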
> > +
> > +/* VF-initiated commands */
> > +enum dlb2_mbox_cmd_type {
> > +       DLB2_MBOX_CMD_REGISTER,
> > +       DLB2_MBOX_CMD_UNREGISTER,
> > +       DLB2_MBOX_CMD_GET_NUM_RESOURCES,
> > +       DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN,
> > +       DLB2_MBOX_CMD_RESET_SCHED_DOMAIN,
> > +       DLB2_MBOX_CMD_CREATE_LDB_QUEUE,
> > +       DLB2_MBOX_CMD_CREATE_DIR_QUEUE,
> > +       DLB2_MBOX_CMD_CREATE_LDB_PORT,
> > +       DLB2_MBOX_CMD_CREATE_DIR_PORT,
> > +       DLB2_MBOX_CMD_ENABLE_LDB_PORT,
> > +       DLB2_MBOX_CMD_DISABLE_LDB_PORT,
> > +       DLB2_MBOX_CMD_ENABLE_DIR_PORT,
> > +       DLB2_MBOX_CMD_DISABLE_DIR_PORT,
> > +       DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
> > +       DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
> > +       DLB2_MBOX_CMD_MAP_QID,
> > +       DLB2_MBOX_CMD_UNMAP_QID,
> > +       DLB2_MBOX_CMD_START_DOMAIN,
> > +       DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR,
> > +       DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR,
> > +       DLB2_MBOX_CMD_ARM_CQ_INTR,
> > +       DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES,
> > +       DLB2_MBOX_CMD_GET_SN_ALLOCATION,
> > +       DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
> > +       DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
> > +       DLB2_MBOX_CMD_PENDING_PORT_UNMAPS,
> > +       DLB2_MBOX_CMD_GET_COS_BW,
> > +       DLB2_MBOX_CMD_GET_SN_OCCUPANCY,
> > +       DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE,
> > +
> > +       /* NUM_DLB2_MBOX_CMD_TYPES must be last */
> > +       NUM_DLB2_MBOX_CMD_TYPES,
> > +};
> > +
> > +static const char dlb2_mbox_cmd_type_strings[][128] = {
> > +       "DLB2_MBOX_CMD_REGISTER",
> > +       "DLB2_MBOX_CMD_UNREGISTER",
> > +       "DLB2_MBOX_CMD_GET_NUM_RESOURCES",
> > +       "DLB2_MBOX_CMD_CREATE_SCHED_DOMAIN",
> > +       "DLB2_MBOX_CMD_RESET_SCHED_DOMAIN",
> > +       "DLB2_MBOX_CMD_CREATE_LDB_QUEUE",
> > +       "DLB2_MBOX_CMD_CREATE_DIR_QUEUE",
> > +       "DLB2_MBOX_CMD_CREATE_LDB_PORT",
> > +       "DLB2_MBOX_CMD_CREATE_DIR_PORT",
> > +       "DLB2_MBOX_CMD_ENABLE_LDB_PORT",
> > +       "DLB2_MBOX_CMD_DISABLE_LDB_PORT",
> > +       "DLB2_MBOX_CMD_ENABLE_DIR_PORT",
> > +       "DLB2_MBOX_CMD_DISABLE_DIR_PORT",
> > +       "DLB2_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
> > +       "DLB2_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
> > +       "DLB2_MBOX_CMD_MAP_QID",
> > +       "DLB2_MBOX_CMD_UNMAP_QID",
> > +       "DLB2_MBOX_CMD_START_DOMAIN",
> > +       "DLB2_MBOX_CMD_ENABLE_LDB_PORT_INTR",
> > +       "DLB2_MBOX_CMD_ENABLE_DIR_PORT_INTR",
> > +       "DLB2_MBOX_CMD_ARM_CQ_INTR",
> > +       "DLB2_MBOX_CMD_GET_NUM_USED_RESOURCES",
> > +       "DLB2_MBOX_CMD_GET_SN_ALLOCATION",
> > +       "DLB2_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
> > +       "DLB2_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
> > +       "DLB2_MBOX_CMD_PENDING_PORT_UNMAPS",
> > +       "DLB2_MBOX_CMD_GET_COS_BW",
> > +       "DLB2_MBOX_CMD_GET_SN_OCCUPANCY",
> > +       "DLB2_MBOX_CMD_QUERY_CQ_POLL_MODE",
> > +};
> > +
> > +/* PF-initiated commands */
> > +enum dlb2_mbox_vf_cmd_type {
> > +       DLB2_MBOX_VF_CMD_DOMAIN_ALERT,
> > +       DLB2_MBOX_VF_CMD_NOTIFICATION,
> > +       DLB2_MBOX_VF_CMD_IN_USE,
> > +
> > +       /* NUM_DLB2_MBOX_VF_CMD_TYPES must be last */
> > +       NUM_DLB2_MBOX_VF_CMD_TYPES,
> > +};
> > +
> > +static const char dlb2_mbox_vf_cmd_type_strings[][128] = {
> > +       "DLB2_MBOX_VF_CMD_DOMAIN_ALERT",
> > +       "DLB2_MBOX_VF_CMD_NOTIFICATION",
> > +       "DLB2_MBOX_VF_CMD_IN_USE",
> > +};
> > +
> > +#define DLB2_MBOX_CMD_TYPE(hdr) \
> > +       (((struct dlb2_mbox_req_hdr *)hdr)->type)
> > +#define DLB2_MBOX_CMD_STRING(hdr) \
> > +       dlb2_mbox_cmd_type_strings[DLB2_MBOX_CMD_TYPE(hdr)]
> > +
> > +enum dlb2_mbox_status_type {
> > +       DLB2_MBOX_ST_SUCCESS,
> > +       DLB2_MBOX_ST_INVALID_CMD_TYPE,
> > +       DLB2_MBOX_ST_VERSION_MISMATCH,
> > +       DLB2_MBOX_ST_INVALID_OWNER_VF,
> > +};
> > +
> > +static const char dlb2_mbox_status_type_strings[][128] = {
> > +       "DLB2_MBOX_ST_SUCCESS",
> > +       "DLB2_MBOX_ST_INVALID_CMD_TYPE",
> > +       "DLB2_MBOX_ST_VERSION_MISMATCH",
> > +       "DLB2_MBOX_ST_INVALID_OWNER_VF",
> > +};
> > +
> > +#define DLB2_MBOX_ST_TYPE(hdr) \
> > +       (((struct dlb2_mbox_resp_hdr *)hdr)->status)
> > +#define DLB2_MBOX_ST_STRING(hdr) \
> > +       dlb2_mbox_status_type_strings[DLB2_MBOX_ST_TYPE(hdr)]
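
The `DLB2_MBOX_CMD_STRING`/`DLB2_MBOX_ST_STRING` macros index the string tables with the u32 type stored in the first word of a raw message. A minimal sketch of that lookup, with a reduced table and illustrative names:

```c
#include <string.h>

/* First word of every request message carries the command type. */
struct mbox_req_hdr {
	unsigned int type;
};

/* Reduced stand-in for the patch's dlb2_mbox_cmd_type_strings[] table. */
static const char mbox_cmd_strings[][64] = {
	"DLB2_MBOX_CMD_REGISTER",
	"DLB2_MBOX_CMD_UNREGISTER",
};

/* Equivalent of DLB2_MBOX_CMD_STRING(hdr): cast the raw buffer to the
 * header type and use its type field as a table index. */
static const char *mbox_cmd_string(void *hdr)
{
	return mbox_cmd_strings[((struct mbox_req_hdr *)hdr)->type];
}
```

Note the macro form (like this sketch) performs no bounds check, so callers must only pass validated command types.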
> > +
> > +/* This structure is always the first field in a request structure */
> > +struct dlb2_mbox_req_hdr {
> > +       u32 type;
> > +};
> > +
> > +/* This structure is always the first field in a response structure */
> > +struct dlb2_mbox_resp_hdr {
> > +       u32 status;
> > +};
> > +
> > +struct dlb2_mbox_register_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u16 min_interface_version;
> > +       u16 max_interface_version;
> > +};
> > +
> > +struct dlb2_mbox_register_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 interface_version;
> > +       u8 pf_id;
> > +       u8 vf_id;
> > +       u8 is_auxiliary_vf;
> > +       u8 primary_vf_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_unregister_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_unregister_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_get_num_resources_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_get_num_resources_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u16 num_sched_domains;
> > +       u16 num_ldb_queues;
> > +       u16 num_ldb_ports;
> > +       u16 num_cos_ldb_ports[4];
> > +       u16 num_dir_ports;
> > +       u32 num_atomic_inflights;
> > +       u32 num_hist_list_entries;
> > +       u32 max_contiguous_hist_list_entries;
> > +       u16 num_ldb_credits;
> > +       u16 num_dir_credits;
> > +};
> > +
> > +struct dlb2_mbox_create_sched_domain_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 num_ldb_queues;
> > +       u32 num_ldb_ports;
> > +       u32 num_cos_ldb_ports[4];
> > +       u32 num_dir_ports;
> > +       u32 num_atomic_inflights;
> > +       u32 num_hist_list_entries;
> > +       u32 num_ldb_credits;
> > +       u32 num_dir_credits;
> > +       u8 cos_strict;
> > +       u8 padding0[3];
> > +       u32 padding1;
> > +};
> > +
> > +struct dlb2_mbox_create_sched_domain_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_reset_sched_domain_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_reset_sched_domain_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +};
> > +
> > +struct dlb2_mbox_create_ldb_queue_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 num_sequence_numbers;
> > +       u32 num_qid_inflights;
> > +       u32 num_atomic_inflights;
> > +       u32 lock_id_comp_level;
> > +       u32 depth_threshold;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_create_ldb_queue_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_create_dir_queue_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 depth_threshold;
> > +};
> > +
> > +struct dlb2_mbox_create_dir_queue_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_create_ldb_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u16 cq_depth;
> > +       u16 cq_history_list_size;
> > +       u8 cos_id;
> > +       u8 cos_strict;
> > +       u16 padding1;
> > +       u64 cq_base_address;
> > +};
> > +
> > +struct dlb2_mbox_create_ldb_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_create_dir_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u64 cq_base_address;
> > +       u16 cq_depth;
> > +       u16 padding0;
> > +       s32 queue_id;
> > +};
> > +
> > +struct dlb2_mbox_create_dir_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_enable_ldb_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_enable_ldb_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_disable_ldb_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_disable_ldb_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_enable_dir_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_enable_dir_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_disable_dir_port_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_disable_dir_port_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_ldb_port_owned_by_domain_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_ldb_port_owned_by_domain_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       s32 owned;
> > +};
> > +
> > +struct dlb2_mbox_dir_port_owned_by_domain_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_dir_port_owned_by_domain_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       s32 owned;
> > +};
> > +
> > +struct dlb2_mbox_map_qid_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 qid;
> > +       u32 priority;
> > +       u32 padding0;
> > +};
> > +
> > +struct dlb2_mbox_map_qid_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 id;
> > +};
> > +
> > +struct dlb2_mbox_unmap_qid_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 qid;
> > +};
> > +
> > +struct dlb2_mbox_unmap_qid_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_start_domain_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +};
> > +
> > +struct dlb2_mbox_start_domain_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_enable_ldb_port_intr_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u16 port_id;
> > +       u16 thresh;
> > +       u16 vector;
> > +       u16 owner_vf;
> > +       u16 reserved[2];
> > +};
> > +
> > +struct dlb2_mbox_enable_ldb_port_intr_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_enable_dir_port_intr_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u16 port_id;
> > +       u16 thresh;
> > +       u16 vector;
> > +       u16 owner_vf;
> > +       u16 reserved[2];
> > +};
> > +
> > +struct dlb2_mbox_enable_dir_port_intr_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_arm_cq_intr_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 is_ldb;
> > +};
> > +
> > +struct dlb2_mbox_arm_cq_intr_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 padding0;
> > +};
> > +
> > +/*
> > + * The alert_id and aux_alert_data follow the format of the alerts defined in
> > + * dlb2_types.h. The alert_id contains an enum dlb2_domain_alert_id value,
> > + * and the aux_alert_data value varies depending on the alert.
> > + */
> > +struct dlb2_mbox_vf_alert_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 alert_id;
> > +       u32 aux_alert_data;
> > +};
> > +
> > +enum dlb2_mbox_vf_notification_type {
> > +       DLB2_MBOX_VF_NOTIFICATION_PRE_RESET,
> > +       DLB2_MBOX_VF_NOTIFICATION_POST_RESET,
> > +
> > +       /* NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES must be last */
> > +       NUM_DLB2_MBOX_VF_NOTIFICATION_TYPES,
> > +};
> > +
> > +struct dlb2_mbox_vf_notification_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 notification;
> > +};
> > +
> > +struct dlb2_mbox_vf_in_use_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_vf_in_use_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 in_use;
> > +};
> > +
> > +struct dlb2_mbox_get_sn_allocation_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 group_id;
> > +};
> > +
> > +struct dlb2_mbox_get_sn_allocation_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 num;
> > +};
> > +
> > +struct dlb2_mbox_get_ldb_queue_depth_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 queue_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_get_ldb_queue_depth_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 depth;
> > +};
> > +
> > +struct dlb2_mbox_get_dir_queue_depth_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 queue_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_get_dir_queue_depth_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 depth;
> > +};
> > +
> > +struct dlb2_mbox_pending_port_unmaps_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 domain_id;
> > +       u32 port_id;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_pending_port_unmaps_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 num;
> > +};
> > +
> > +struct dlb2_mbox_get_cos_bw_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 cos_id;
> > +};
> > +
> > +struct dlb2_mbox_get_cos_bw_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 num;
> > +};
> > +
> > +struct dlb2_mbox_get_sn_occupancy_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 group_id;
> > +};
> > +
> > +struct dlb2_mbox_get_sn_occupancy_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 num;
> > +};
> > +
> > +struct dlb2_mbox_query_cq_poll_mode_cmd_req {
> > +       struct dlb2_mbox_req_hdr hdr;
> > +       u32 padding;
> > +};
> > +
> > +struct dlb2_mbox_query_cq_poll_mode_cmd_resp {
> > +       struct dlb2_mbox_resp_hdr hdr;
> > +       u32 error_code;
> > +       u32 status;
> > +       u32 mode;
> > +};
> > +
> > +#endif /* __DLB2_BASE_DLB2_MBOX_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep.h b/drivers/event/dlb2/pf/base/dlb2_osdep.h
> > new file mode 100644
> > index 0000000..43f2125
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_osdep.h
> > @@ -0,0 +1,247 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_OSDEP_H
> > +#define __DLB2_OSDEP_H
> > +
> > +#include <string.h>
> > +#include <time.h>
> > +#include <unistd.h>
> > +#include <pthread.h>
> > +
> > +#include <rte_string_fns.h>
> > +#include <rte_cycles.h>
> > +#include <rte_io.h>
> > +#include <rte_log.h>
> > +#include <rte_spinlock.h>
> > +#include "../dlb2_main.h"
> > +#include "dlb2_resource.h"
> > +#include "../../dlb2_log.h"
> > +#include "../../dlb2_user.h"
> > +
> > +
> > +#define DLB2_PCI_REG_READ(addr)        rte_read32((void *)addr)
> > +#define DLB2_PCI_REG_WRITE(reg, value) rte_write32(value, (void *)reg)
> > +
> > +/* Read/write register 'reg' in the CSR BAR space */
> > +#define DLB2_CSR_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->csr_kva + (reg)))
> > +#define DLB2_CSR_RD(hw, reg) \
> > +       DLB2_PCI_REG_READ(DLB2_CSR_REG_ADDR((hw), (reg)))
> > +#define DLB2_CSR_WR(hw, reg, value) \
> > +       DLB2_PCI_REG_WRITE(DLB2_CSR_REG_ADDR((hw), (reg)), (value))
> > +
> > +/* Read/write register 'reg' in the func BAR space */
> > +#define DLB2_FUNC_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->func_kva + (reg)))
> > +#define DLB2_FUNC_RD(hw, reg) \
> > +       DLB2_PCI_REG_READ(DLB2_FUNC_REG_ADDR((hw), (reg)))
> > +#define DLB2_FUNC_WR(hw, reg, value) \
> > +       DLB2_PCI_REG_WRITE(DLB2_FUNC_REG_ADDR((hw), (reg)), (value))
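For readers unfamiliar with the pattern: the CSR/func accessors above compose an address macro with 32-bit MMIO reads/writes against a mapped BAR. A minimal stand-alone sketch of the same composition, with a plain in-memory block standing in for the mapped PCI region (all `MOCK_*`/`mock_*` names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a device: a plain memory block replaces the
 * mapped CSR BAR so the macro composition can be exercised without hardware.
 */
struct mock_hw {
	uintptr_t csr_kva; /* base address of the "mapped" CSR space */
};

#define MOCK_CSR_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->csr_kva + (reg)))
#define MOCK_CSR_RD(hw, reg) \
	(*(volatile uint32_t *)MOCK_CSR_REG_ADDR((hw), (reg)))
#define MOCK_CSR_WR(hw, reg, value) \
	(*(volatile uint32_t *)MOCK_CSR_REG_ADDR((hw), (reg)) = (value))

/* Write a register at byte offset "reg", then read it back. */
static uint32_t mock_csr_roundtrip(uint32_t reg, uint32_t value)
{
	static uint32_t bar_space[1024]; /* pretend BAR, 4 KiB */
	struct mock_hw hw = { .csr_kva = (uintptr_t)bar_space };

	MOCK_CSR_WR(&hw, reg, value);
	return MOCK_CSR_RD(&hw, reg);
}
```

In the real driver, `rte_read32()`/`rte_write32()` replace the raw volatile accesses so the correct MMIO ordering semantics apply.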
> > +
> > +/* Map to PMDs logging interface */
> > +#define DLB2_ERR(dev, fmt, args...) \
> > +       DLB2_LOG_ERR(fmt, ## args)
> > +
> > +#define DLB2_INFO(dev, fmt, args...) \
> > +       DLB2_LOG_INFO(fmt, ## args)
> > +
> > +#define DLB2_DEBUG(dev, fmt, args...) \
> > +       DLB2_LOG_DBG(fmt, ## args)
> > +
> > +/**
> > + * os_udelay() - busy-wait for a number of microseconds
> > + * @usecs: delay duration.
> > + */
> > +static inline void os_udelay(int usecs)
> > +{
> > +       rte_delay_us(usecs);
> > +}
> > +
> > +/**
> > + * os_msleep() - sleep for a number of milliseconds
> > + * @msecs: delay duration.
> > + */
> > +static inline void os_msleep(int msecs)
> > +{
> > +       rte_delay_ms(msecs);
> > +}
> > +
> > +#define DLB2_PP_BASE(__is_ldb) \
> > +       ((__is_ldb) ? DLB2_LDB_PP_BASE : DLB2_DIR_PP_BASE)
> > +
> > +/**
> > + * os_map_producer_port() - map a producer port into the caller's address space
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @port_id: port ID
> > + * @is_ldb: true for load-balanced port, false for a directed port
> > + *
> > + * This function maps the requested producer port memory into the caller's
> > + * address space.
> > + *
> > + * Return:
> > + * Returns the base address at which the PP memory was mapped, else NULL.
> > + */
> > +static inline void *os_map_producer_port(struct dlb2_hw *hw,
> > +                                        u8 port_id,
> > +                                        bool is_ldb)
> > +{
> > +       uint64_t addr;
> > +       uint64_t pp_dma_base;
> > +
> > +       pp_dma_base = (uintptr_t)hw->func_kva + DLB2_PP_BASE(is_ldb);
> > +       addr = (pp_dma_base + (PAGE_SIZE * port_id));
> > +
> > +       return (void *)(uintptr_t)addr;
> > +}
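The mapping above is pure address arithmetic: each producer port occupies one page within the func BAR, starting at a per-class base. A tiny sketch of that arithmetic (assuming a 4 KiB page; `mock_*` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_PAGE_SIZE 4096u /* assumed; the real PAGE_SIZE comes from OS headers */

/* Mirrors os_map_producer_port(): producer port N lives one page past
 * producer port N-1, offset from the class base (LDB or DIR) in the
 * func BAR.
 */
static uintptr_t mock_pp_addr(uintptr_t func_kva, uintptr_t pp_base,
			      uint8_t port_id)
{
	return func_kva + pp_base + (uintptr_t)MOCK_PAGE_SIZE * port_id;
}
```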
> > +
> > +/**
> > + * os_unmap_producer_port() - unmap a producer port
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @addr: mapped producer port address
> > + *
> > + * This function undoes os_map_producer_port() by unmapping the producer port
> > + * memory from the caller's address space. It is a no-op here because
> > + * os_map_producer_port() does not create a new mapping.
> > + */
> > +static inline void os_unmap_producer_port(struct dlb2_hw *hw, void *addr)
> > +{
> > +       RTE_SET_USED(hw);
> > +       RTE_SET_USED(addr);
> > +}
> > +
> > +/**
> > + * os_fence_hcw() - fence an HCW to ensure it arrives at the device
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @pp_addr: producer port address
> > + */
> > +static inline void os_fence_hcw(struct dlb2_hw *hw, u64 *pp_addr)
> > +{
> > +       RTE_SET_USED(hw);
> > +
> > +       /* To ensure outstanding HCWs reach the device, read the PP address. IA
> > +        * memory ordering prevents reads from passing older writes, and the
> > +        * mfence also ensures this.
> > +        */
> > +       rte_mb();
> > +
> > +       *(volatile u64 *)pp_addr;
> > +}
> > +
> > +/**
> > + * os_enqueue_four_hcws() - enqueue four HCWs to DLB
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @hcw: pointer to the 64B-aligned contiguous HCW memory
> > + * @addr: producer port address
> > + */
> > +static inline void os_enqueue_four_hcws(struct dlb2_hw *hw,
> > +                                       struct dlb2_hcw *hcw,
> > +                                       void *addr)
> > +{
> > +       struct dlb2_dev *dlb2_dev;
> > +
> > +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> > +
> > +       dlb2_dev->enqueue_four(hcw, addr);
> > +}
> > +
> > +/**
> > + * DLB2_HW_ERR() - log an error message
> > + * @dlb2: dlb2_hw handle for a particular device.
> > + * @...: variable string args.
> > + */
> > +#define DLB2_HW_ERR(dlb2, ...) do {    \
> > +       RTE_SET_USED(dlb2);             \
> > +       DLB2_ERR(dlb2, __VA_ARGS__);    \
> > +} while (0)
> > +
> > +/**
> > + * DLB2_HW_DBG() - log an info message
> > + * @dlb2: dlb2_hw handle for a particular device.
> > + * @...: variable string args.
> > + */
> > +#define DLB2_HW_DBG(dlb2, ...) do {    \
> > +       RTE_SET_USED(dlb2);             \
> > +       DLB2_DEBUG(dlb2, __VA_ARGS__);  \
> > +} while (0)
> > +
> > +/* The callback runs until it completes all outstanding QID->CQ
> > + * map and unmap requests. To prevent deadlock, this function gives other
> > + * threads a chance to grab the resource mutex and configure hardware.
> > + */
> > +static void *dlb2_complete_queue_map_unmap(void *__args)
> > +{
> > +       struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)__args;
> > +       int ret;
> > +
> > +       while (1) {
> > +               rte_spinlock_lock(&dlb2_dev->resource_mutex);
> > +
> > +               ret = dlb2_finish_unmap_qid_procedures(&dlb2_dev->hw);
> > +               ret += dlb2_finish_map_qid_procedures(&dlb2_dev->hw);
> > +
> > +               if (ret != 0) {
> > +                       rte_spinlock_unlock(&dlb2_dev->resource_mutex);
> > +                       /* Relinquish the CPU so the application can process
> > +                        * its CQs, so this function doesn't deadlock.
> > +                        */
> > +                       sched_yield();
> > +               } else {
> > +                       break;
> > +               }
> > +       }
> > +
> > +       dlb2_dev->worker_launched = false;
> > +
> > +       rte_spinlock_unlock(&dlb2_dev->resource_mutex);
> > +
> > +       return NULL;
> > +}
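The loop above is a yield-and-retry drain: hold the lock, make what progress is possible, and if work remains, drop the lock and yield so the application threads can service their CQs. A self-contained sketch of the same pattern with a plain counter standing in for pending map/unmap work (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t mock_lock = PTHREAD_MUTEX_INITIALIZER;
static int mock_pending = 5;         /* stand-in for outstanding unmaps */
static int mock_worker_launched = 1;

/* Drain one unit of work per pass, yielding between passes so other
 * threads can grab the lock, mirroring dlb2_complete_queue_map_unmap().
 */
static void *mock_drain_worker(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&mock_lock);
		if (mock_pending > 0) {
			mock_pending--; /* partial progress */
			pthread_mutex_unlock(&mock_lock);
			sched_yield(); /* relinquish the CPU; avoid deadlock */
		} else {
			break; /* done; exit the loop still holding the lock */
		}
	}
	mock_worker_launched = 0;
	pthread_mutex_unlock(&mock_lock);
	return NULL;
}

static int mock_run_drain(void)
{
	pthread_t t;

	if (pthread_create(&t, NULL, mock_drain_worker, NULL) != 0)
		return -1;
	pthread_join(t, NULL);
	return mock_worker_launched; /* 0 once the worker completed */
}
```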
> > +
> > +
> > +/**
> > + * os_schedule_work() - launch a thread to process pending map and unmap work
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function launches a control thread that will run until all pending
> > + * map and unmap procedures are complete.
> > + */
> > +static inline void os_schedule_work(struct dlb2_hw *hw)
> > +{
> > +       struct dlb2_dev *dlb2_dev;
> > +       pthread_t complete_queue_map_unmap_thread;
> > +       int ret;
> > +
> > +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> > +
> > +       ret = rte_ctrl_thread_create(&complete_queue_map_unmap_thread,
> > +                                    "dlb_queue_unmap_waiter",
> > +                                    NULL,
> > +                                    dlb2_complete_queue_map_unmap,
> > +                                    dlb2_dev);
> > +       if (ret)
> > +               DLB2_ERR(dlb2_dev,
> > +                        "Could not create queue complete map/unmap thread, err=%d\n",
> > +                        ret);
> > +       else
> > +               dlb2_dev->worker_launched = true;
> > +}
> > +
> > +/**
> > + * os_worker_active() - query whether the map/unmap worker thread is active
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function returns a boolean indicating whether a thread (launched by
> > + * os_schedule_work()) is active. This function is used to determine
> > + * whether or not to launch a worker thread.
> > + */
> > +static inline bool os_worker_active(struct dlb2_hw *hw)
> > +{
> > +       struct dlb2_dev *dlb2_dev;
> > +
> > +       dlb2_dev = container_of(hw, struct dlb2_dev, hw);
> > +
> > +       return dlb2_dev->worker_launched;
> > +}
> > +
> > +#endif /*  __DLB2_OSDEP_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h b/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
> > new file mode 100644
> > index 0000000..423233b
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_bitmap.h
> > @@ -0,0 +1,440 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_OSDEP_BITMAP_H
> > +#define __DLB2_OSDEP_BITMAP_H
> > +
> > +#include <stdint.h>
> > +#include <stdbool.h>
> > +#include <stdio.h>
> > +#include <unistd.h>
> > +#include <rte_bitmap.h>
> > +#include <rte_string_fns.h>
> > +#include <rte_malloc.h>
> > +#include <rte_errno.h>
> > +#include "../dlb2_main.h"
> > +
> > +/*************************/
> > +/*** Bitmap operations ***/
> > +/*************************/
> > +struct dlb2_bitmap {
> > +       struct rte_bitmap *map;
> > +       unsigned int len;
> > +};
> > +
> > +/**
> > + * dlb2_bitmap_alloc() - alloc a bitmap data structure
> > + * @bitmap: pointer to dlb2_bitmap structure pointer.
> > + * @len: number of entries in the bitmap.
> > + *
> > + * This function allocates a bitmap and initializes it with length @len. All
> > + * entries are initially zero.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or len is 0.
> > + * ENOMEM - could not allocate memory for the bitmap data structure.
> > + */
> > +static inline int dlb2_bitmap_alloc(struct dlb2_bitmap **bitmap,
> > +                                   unsigned int len)
> > +{
> > +       struct dlb2_bitmap *bm;
> > +       void *mem;
> > +       uint32_t alloc_size;
> > +       uint32_t nbits = (uint32_t)len;
> > +
> > +       if (bitmap == NULL || nbits == 0)
> > +               return -EINVAL;
> > +
> > +       /* Allocate DLB2 bitmap control struct */
> > +       bm = rte_malloc("DLB2_PF",
> > +                       sizeof(struct dlb2_bitmap),
> > +                       RTE_CACHE_LINE_SIZE);
> > +
> > +       if (bm == NULL)
> > +               return -ENOMEM;
> > +
> > +       /* Allocate bitmap memory */
> > +       alloc_size = rte_bitmap_get_memory_footprint(nbits);
> > +       mem = rte_malloc("DLB2_PF_BITMAP", alloc_size, RTE_CACHE_LINE_SIZE);
> > +       if (mem == NULL) {
> > +               rte_free(bm);
> > +               return -ENOMEM;
> > +       }
> > +
> > +       bm->map = rte_bitmap_init(len, mem, alloc_size);
> > +       if (bm->map == NULL) {
> > +               rte_free(mem);
> > +               rte_free(bm);
> > +               return -ENOMEM;
> > +       }
> > +
> > +       bm->len = len;
> > +
> > +       *bitmap = bm;
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_free() - free a previously allocated bitmap data structure
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * This function frees a bitmap that was allocated with dlb2_bitmap_alloc().
> > + */
> > +static inline void dlb2_bitmap_free(struct dlb2_bitmap *bitmap)
> > +{
> > +       if (bitmap == NULL)
> > +               return;
> > +
> > +       rte_free(bitmap->map);
> > +       rte_free(bitmap);
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_fill() - fill a bitmap with all 1s
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * This function sets all bitmap values to 1.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_fill(struct dlb2_bitmap *bitmap)
> > +{
> > +       unsigned int i;
> > +
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != bitmap->len; i++)
> > +               rte_bitmap_set(bitmap->map, i);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_zero() - fill a bitmap with all 0s
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * This function sets all bitmap values to 0.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_zero(struct dlb2_bitmap *bitmap)
> > +{
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       rte_bitmap_reset(bitmap->map);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_set() - set a bitmap entry
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + * @bit: bit index.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
> > + *         bitmap length.
> > + */
> > +static inline int dlb2_bitmap_set(struct dlb2_bitmap *bitmap,
> > +                                 unsigned int bit)
> > +{
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->len <= bit)
> > +               return -EINVAL;
> > +
> > +       rte_bitmap_set(bitmap->map, bit);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_set_range() - set a range of bitmap entries
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + * @bit: starting bit index.
> > + * @len: length of the range.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
> > + *         length.
> > + */
> > +static inline int dlb2_bitmap_set_range(struct dlb2_bitmap *bitmap,
> > +                                       unsigned int bit,
> > +                                       unsigned int len)
> > +{
> > +       unsigned int i;
> > +
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->len < bit + len)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != len; i++)
> > +               rte_bitmap_set(bitmap->map, bit + i);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_clear() - clear a bitmap entry
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + * @bit: bit index.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
> > + *         bitmap length.
> > + */
> > +static inline int dlb2_bitmap_clear(struct dlb2_bitmap *bitmap,
> > +                                   unsigned int bit)
> > +{
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->len <= bit)
> > +               return -EINVAL;
> > +
> > +       rte_bitmap_clear(bitmap->map, bit);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_clear_range() - clear a range of bitmap entries
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + * @bit: starting bit index.
> > + * @len: length of the range.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
> > + *         length.
> > + */
> > +static inline int dlb2_bitmap_clear_range(struct dlb2_bitmap *bitmap,
> > +                                         unsigned int bit,
> > +                                         unsigned int len)
> > +{
> > +       unsigned int i;
> > +
> > +       if (bitmap  == NULL || bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->len < bit + len)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != len; i++)
> > +               rte_bitmap_clear(bitmap->map, bit + i);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_find_set_bit_range() - find a range of set bits
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + * @len: length of the range.
> > + *
> > + * This function looks for a range of set bits of length @len.
> > + *
> > + * Return:
> > + * Returns the base bit index upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * ENOENT - unable to find a length *len* range of set bits.
> > + * EINVAL - bitmap is NULL or is uninitialized, or len is invalid.
> > + */
> > +static inline int dlb2_bitmap_find_set_bit_range(struct dlb2_bitmap *bitmap,
> > +                                                unsigned int len)
> > +{
> > +       unsigned int i, j = 0;
> > +
> > +       if (bitmap  == NULL || bitmap->map == NULL || len == 0)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->len < len)
> > +               return -ENOENT;
> > +
> > +       for (i = 0; i != bitmap->len; i++) {
> > +               if  (rte_bitmap_get(bitmap->map, i)) {
> > +                       if (++j == len)
> > +                               return i - j + 1;
> > +               } else {
> > +                       j = 0;
> > +               }
> > +       }
> > +
> > +       /* No set bit range of length len? */
> > +       return -ENOENT;
> > +}
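The search above is a single sliding-window pass: count consecutive set bits, reset the counter on a clear bit, and return `i - j + 1` (the base of the run) once the counter reaches the requested length. A stand-alone version of the same algorithm over a plain `uint64_t` mask, for illustration (names are not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Return the base index of the first run of "len" consecutive set bits
 * within the low "nbits" bits of "mask", or -1 if no such run exists.
 */
static int mock_find_set_bit_range(uint64_t mask, unsigned int nbits,
				   unsigned int len)
{
	unsigned int i, run = 0;

	if (len == 0 || len > nbits || nbits > 64)
		return -1;

	for (i = 0; i != nbits; i++) {
		if ((mask >> i) & 1) {
			if (++run == len)
				return (int)(i - run + 1); /* base of the run */
		} else {
			run = 0; /* run broken; start over */
		}
	}
	return -1;
}
```

For mask `0xE6` (bits 1, 2, 5, 6, 7 set), a run of 2 starts at bit 1, a run of 3 starts at bit 5, and no run of 4 exists.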
> > +
> > +/**
> > + * dlb2_bitmap_find_set_bit() - find the first set bit
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * This function looks for a single set bit.
> > + *
> > + * Return:
> > + * Returns the index of the first set bit upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * ENOENT - the bitmap contains no set bits.
> > + * EINVAL - bitmap is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_find_set_bit(struct dlb2_bitmap *bitmap)
> > +{
> > +       unsigned int i;
> > +
> > +       if (bitmap == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != bitmap->len; i++) {
> > +               if  (rte_bitmap_get(bitmap->map, i))
> > +                       return i;
> > +       }
> > +
> > +       return -ENOENT;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_count() - returns the number of set bits
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * This function returns the number of set bits in the bitmap.
> > + *
> > + * Return:
> > + * Returns the number of set bits upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_count(struct dlb2_bitmap *bitmap)
> > +{
> > +       int weight = 0;
> > +       unsigned int i;
> > +
> > +       if (bitmap == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != bitmap->len; i++) {
> > +               if  (rte_bitmap_get(bitmap->map, i))
> > +                       weight++;
> > +       }
> > +       return weight;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_longest_set_range() - returns longest contiguous range of set
> > + *                                   bits
> > + * @bitmap: pointer to dlb2_bitmap structure.
> > + *
> > + * Return:
> > + * Returns the bitmap's longest contiguous range of set bits upon success,
> > + * <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - bitmap is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_longest_set_range(struct dlb2_bitmap *bitmap)
> > +{
> > +       int max_len = 0, len = 0;
> > +       unsigned int i;
> > +
> > +       if (bitmap == NULL)
> > +               return -EINVAL;
> > +
> > +       if (bitmap->map == NULL)
> > +               return -EINVAL;
> > +
> > +       for (i = 0; i != bitmap->len; i++) {
> > +               if  (rte_bitmap_get(bitmap->map, i)) {
> > +                       len++;
> > +               } else {
> > +                       if (len > max_len)
> > +                               max_len = len;
> > +                       len = 0;
> > +               }
> > +       }
> > +
> > +       if (len > max_len)
> > +               max_len = len;
> > +
> > +       return max_len;
> > +}
> > +
> > +/**
> > + * dlb2_bitmap_or() - store the logical 'or' of two bitmaps into a third
> > + * @dest: pointer to dlb2_bitmap structure, which will contain the results of
> > + *       the 'or' of src1 and src2.
> > + * @src1: pointer to dlb2_bitmap structure, will be 'or'ed with src2.
> > + * @src2: pointer to dlb2_bitmap structure, will be 'or'ed with src1.
> > + *
> > + * This function 'or's two bitmaps together and stores the result in a third
> > + * bitmap. The source and destination bitmaps can be the same.
> > + *
> > + * Return:
> > + * Returns the number of set bits upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - One of the bitmaps is NULL or is uninitialized.
> > + */
> > +static inline int dlb2_bitmap_or(struct dlb2_bitmap *dest,
> > +                                struct dlb2_bitmap *src1,
> > +                                struct dlb2_bitmap *src2)
> > +{
> > +       unsigned int i, min;
> > +       int numset = 0;
> > +
> > +       if (dest  == NULL || dest->map == NULL ||
> > +           src1  == NULL || src1->map == NULL ||
> > +           src2  == NULL || src2->map == NULL)
> > +               return -EINVAL;
> > +
> > +       min = dest->len;
> > +       min = (min > src1->len) ? src1->len : min;
> > +       min = (min > src2->len) ? src2->len : min;
> > +
> > +       for (i = 0; i != min; i++) {
> > +               if  (rte_bitmap_get(src1->map, i) ||
> > +                    rte_bitmap_get(src2->map, i)) {
> > +                       rte_bitmap_set(dest->map, i);
> > +                       numset++;
> > +               } else {
> > +                       rte_bitmap_clear(dest->map, i);
> > +               }
> > +       }
> > +
> > +       return numset;
> > +}
> > +
> > +#endif /*  __DLB2_OSDEP_BITMAP_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_list.h b/drivers/event/dlb2/pf/base/dlb2_osdep_list.h
> > new file mode 100644
> > index 0000000..5531739
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_list.h
> > @@ -0,0 +1,131 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_OSDEP_LIST_H
> > +#define __DLB2_OSDEP_LIST_H
> > +
> > +#include <rte_tailq.h>
> > +
> > +struct dlb2_list_entry {
> > +       TAILQ_ENTRY(dlb2_list_entry) node;
> > +};
> > +
> > +/* Dummy - just a struct definition */
> > +TAILQ_HEAD(dlb2_list_head, dlb2_list_entry);
> > +
> > +/* =================
> > + * TAILQ Supplements
> > + * =================
> > + */
> > +
> > +#ifndef TAILQ_FOREACH_ENTRY
> > +#define TAILQ_FOREACH_ENTRY(ptr, head, name, iter)             \
> > +       for ((iter) = TAILQ_FIRST(&head);                       \
> > +           (iter)                                              \
> > +               && (ptr = container_of(iter, typeof(*(ptr)), name)); \
> > +           (iter) = TAILQ_NEXT((iter), node))
> > +#endif
> > +
> > +#ifndef TAILQ_FOREACH_ENTRY_SAFE
> > +#define TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, iter, tvar)  \
> > +       for ((iter) = TAILQ_FIRST(&head);                       \
> > +           (iter) &&                                           \
> > +               (ptr = container_of(iter, typeof(*(ptr)), name)) &&\
> > +               ((tvar) = TAILQ_NEXT((iter), node), 1); \
> > +           (iter) = (tvar))
> > +#endif
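These supplements iterate over generic `dlb2_list_entry` nodes and use `container_of` to recover the struct that embeds them. A minimal usage sketch of that idiom with plain BSD `TAILQ` macros (the `mock_*` types are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

#ifndef container_of
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#endif

/* Generic node + head, like dlb2_list_entry / dlb2_list_head. */
struct mock_entry {
	TAILQ_ENTRY(mock_entry) node;
};
TAILQ_HEAD(mock_head, mock_entry);

/* A resource that embeds the generic node. */
struct mock_port {
	int id;
	struct mock_entry link;
};

static int mock_sum_ids(void)
{
	struct mock_head head;
	struct mock_port a = { .id = 1 }, b = { .id = 2 };
	struct mock_entry *iter;
	int sum = 0;

	TAILQ_INIT(&head);
	TAILQ_INSERT_TAIL(&head, &a.link, node);
	TAILQ_INSERT_TAIL(&head, &b.link, node);

	/* Equivalent of TAILQ_FOREACH_ENTRY(ptr, head, link, iter):
	 * walk the generic nodes, recover the containing struct.
	 */
	for (iter = TAILQ_FIRST(&head); iter != NULL;
	     iter = TAILQ_NEXT(iter, node))
		sum += container_of(iter, struct mock_port, link)->id;

	return sum;
}
```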
> > +
> > +/***********************/
> > +/*** List operations ***/
> > +/***********************/
> > +
> > +/**
> > + * dlb2_list_init_head() - initialize the head of a list
> > + * @head: list head
> > + */
> > +static inline void dlb2_list_init_head(struct dlb2_list_head *head)
> > +{
> > +       TAILQ_INIT(head);
> > +}
> > +
> > +/**
> > + * dlb2_list_add() - add an entry to a list
> > + * @head: list head
> > + * @entry: new list entry
> > + */
> > +static inline void
> > +dlb2_list_add(struct dlb2_list_head *head, struct dlb2_list_entry *entry)
> > +{
> > +       TAILQ_INSERT_TAIL(head, entry, node);
> > +}
> > +
> > +/**
> > + * dlb2_list_del() - delete an entry from a list
> > + * @entry: list entry
> > + * @head: list head
> > + */
> > +static inline void dlb2_list_del(struct dlb2_list_head *head,
> > +                                struct dlb2_list_entry *entry)
> > +{
> > +       TAILQ_REMOVE(head, entry, node);
> > +}
> > +
> > +/**
> > + * dlb2_list_empty() - check if a list is empty
> > + * @head: list head
> > + *
> > + * Return:
> > + * Returns 1 if empty, 0 if not.
> > + */
> > +static inline int dlb2_list_empty(struct dlb2_list_head *head)
> > +{
> > +       return TAILQ_EMPTY(head);
> > +}
> > +
> > +/**
> > + * dlb2_list_splice() - splice a list
> > + * @src_head: list to be added
> > + * @head: where src_head will be inserted
> > + */
> > +static inline void dlb2_list_splice(struct dlb2_list_head *src_head,
> > +                                   struct dlb2_list_head *head)
> > +{
> > +       TAILQ_CONCAT(head, src_head, node);
> > +}
> > +
> > +/**
> > + * DLB2_LIST_HEAD() - retrieve the head of the list
> > + * @head: list head
> > + * @type: type of the list variable
> > + * @name: name of the list field within the containing struct
> > + */
> > +#define DLB2_LIST_HEAD(head, type, name)                       \
> > +       (TAILQ_FIRST(&head) ?                                   \
> > +               container_of(TAILQ_FIRST(&head), type, name) :  \
> > +               NULL)
> > +
> > +/**
> > + * DLB2_LIST_FOR_EACH() - iterate over a list
> > + * @head: list head
> > + * @ptr: pointer to struct containing a struct list
> > + * @name: name of the list field within the containing struct
> > + * @tmp_iter: iterator variable
> > + */
> > +#define DLB2_LIST_FOR_EACH(head, ptr, name, tmp_iter) \
> > +       TAILQ_FOREACH_ENTRY(ptr, head, name, tmp_iter)
> > +
> > +/**
> > + * DLB2_LIST_FOR_EACH_SAFE() - iterate over a list. This loop works even if
> > + * an element is removed from the list while processing it.
> > + * @ptr: pointer to struct containing a struct list
> > + * @ptr_tmp: pointer to struct containing a struct list (temporary)
> > + * @head: list head
> > + * @name: name of the list field within the containing struct
> > + * @tmp_iter: iterator variable
> > + * @saf_itr: iterator variable (temporary)
> > + */
> > +#define DLB2_LIST_FOR_EACH_SAFE(head, ptr, ptr_tmp, name, tmp_iter, saf_itr) \
> > +       TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, tmp_iter, saf_itr)
> > +
> > +#endif /*  __DLB2_OSDEP_LIST_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_osdep_types.h b/drivers/event/dlb2/pf/base/dlb2_osdep_types.h
> > new file mode 100644
> > index 0000000..0a48f7e
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_osdep_types.h
> > @@ -0,0 +1,31 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_OSDEP_TYPES_H
> > +#define __DLB2_OSDEP_TYPES_H
> > +
> > +#include <linux/types.h>
> > +
> > +#include <inttypes.h>
> > +#include <ctype.h>
> > +#include <stdint.h>
> > +#include <stdbool.h>
> > +#include <string.h>
> > +#include <unistd.h>
> > +#include <errno.h>
> > +
> > +/* Types for user mode PF PMD */
> > +typedef uint8_t         u8;
> > +typedef int8_t          s8;
> > +typedef uint16_t        u16;
> > +typedef int16_t         s16;
> > +typedef uint32_t        u32;
> > +typedef int32_t         s32;
> > +typedef uint64_t        u64;
> > +
> > +#define __iomem
> > +
> > +/* END types for user mode PF PMD */
> > +
> > +#endif /* __DLB2_OSDEP_TYPES_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_regs.h b/drivers/event/dlb2/pf/base/dlb2_regs.h
> > new file mode 100644
> > index 0000000..43ecad4
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_regs.h
> > @@ -0,0 +1,2527 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_REGS_H
> > +#define __DLB2_REGS_H
> > +
> > +#include "dlb2_osdep_types.h"
> > +
> > +#define DLB2_FUNC_PF_VF2PF_MAILBOX_BYTES 256
> > +#define DLB2_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
> > +       (0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_VF2PF_MAILBOX_RST 0x0
> > +union dlb2_func_pf_vf2pf_mailbox {
> > +       struct {
> > +               u32 msg : 32;
> > +       } field;
> > +       u32 val;
> > +};
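The register unions in this header pair a bitfield view (`.field`) with a raw 32-bit view (`.val`), so code can set named fields and then write the whole word. A toy version of the pattern (field names hypothetical); note that bitfield layout is implementation-defined, which is acceptable for a driver targeting known compilers/ABIs:

```c
#include <assert.h>
#include <stdint.h>

/* Same shape as the dlb2 register unions: named fields overlay one u32. */
union mock_reg {
	struct {
		uint32_t enable : 1;
		uint32_t count  : 7;
		uint32_t rsvd0  : 24;
	} field;
	uint32_t val;
};

/* Build the raw register word from named fields. */
static uint32_t mock_encode(uint32_t enable, uint32_t count)
{
	union mock_reg r = { .val = 0 };

	r.field.enable = enable & 0x1;
	r.field.count = count & 0x7f;
	return r.val; /* this word would be written via DLB2_CSR_WR */
}

/* Extract a named field from a raw register word. */
static uint32_t mock_decode_count(uint32_t val)
{
	union mock_reg r = { .val = val };

	return r.field.count;
}
```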
> > +
> > +#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
> > +       (0x1f00 + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
> > +union dlb2_func_pf_vf2pf_mailbox_isr {
> > +       struct {
> > +               u32 vf0_isr : 1;
> > +               u32 vf1_isr : 1;
> > +               u32 vf2_isr : 1;
> > +               u32 vf3_isr : 1;
> > +               u32 vf4_isr : 1;
> > +               u32 vf5_isr : 1;
> > +               u32 vf6_isr : 1;
> > +               u32 vf7_isr : 1;
> > +               u32 vf8_isr : 1;
> > +               u32 vf9_isr : 1;
> > +               u32 vf10_isr : 1;
> > +               u32 vf11_isr : 1;
> > +               u32 vf12_isr : 1;
> > +               u32 vf13_isr : 1;
> > +               u32 vf14_isr : 1;
> > +               u32 vf15_isr : 1;
> > +               u32 rsvd0 : 16;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
> > +       (0x1f04 + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
> > +union dlb2_func_pf_vf2pf_flr_isr {
> > +       struct {
> > +               u32 vf0_isr : 1;
> > +               u32 vf1_isr : 1;
> > +               u32 vf2_isr : 1;
> > +               u32 vf3_isr : 1;
> > +               u32 vf4_isr : 1;
> > +               u32 vf5_isr : 1;
> > +               u32 vf6_isr : 1;
> > +               u32 vf7_isr : 1;
> > +               u32 vf8_isr : 1;
> > +               u32 vf9_isr : 1;
> > +               u32 vf10_isr : 1;
> > +               u32 vf11_isr : 1;
> > +               u32 vf12_isr : 1;
> > +               u32 vf13_isr : 1;
> > +               u32 vf14_isr : 1;
> > +               u32 vf15_isr : 1;
> > +               u32 rsvd0 : 16;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
> > +       (0x1f10 + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
> > +union dlb2_func_pf_vf2pf_isr_pend {
> > +       struct {
> > +               u32 isr_pend : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_PF_PF2VF_MAILBOX_BYTES 64
> > +#define DLB2_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
> > +       (0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_PF2VF_MAILBOX_RST 0x0
> > +union dlb2_func_pf_pf2vf_mailbox {
> > +       struct {
> > +               u32 msg : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
> > +       (0x2f00 + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
> > +union dlb2_func_pf_pf2vf_mailbox_isr {
> > +       struct {
> > +               u32 vf0_isr : 1;
> > +               u32 vf1_isr : 1;
> > +               u32 vf2_isr : 1;
> > +               u32 vf3_isr : 1;
> > +               u32 vf4_isr : 1;
> > +               u32 vf5_isr : 1;
> > +               u32 vf6_isr : 1;
> > +               u32 vf7_isr : 1;
> > +               u32 vf8_isr : 1;
> > +               u32 vf9_isr : 1;
> > +               u32 vf10_isr : 1;
> > +               u32 vf11_isr : 1;
> > +               u32 vf12_isr : 1;
> > +               u32 vf13_isr : 1;
> > +               u32 vf14_isr : 1;
> > +               u32 vf15_isr : 1;
> > +               u32 rsvd0 : 16;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
> > +       (0x3000 + (vf_id) * 0x10000)
> > +#define DLB2_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
> > +union dlb2_func_pf_vf_reset_in_progress {
> > +       struct {
> > +               u32 vf0_reset_in_progress : 1;
> > +               u32 vf1_reset_in_progress : 1;
> > +               u32 vf2_reset_in_progress : 1;
> > +               u32 vf3_reset_in_progress : 1;
> > +               u32 vf4_reset_in_progress : 1;
> > +               u32 vf5_reset_in_progress : 1;
> > +               u32 vf6_reset_in_progress : 1;
> > +               u32 vf7_reset_in_progress : 1;
> > +               u32 vf8_reset_in_progress : 1;
> > +               u32 vf9_reset_in_progress : 1;
> > +               u32 vf10_reset_in_progress : 1;
> > +               u32 vf11_reset_in_progress : 1;
> > +               u32 vf12_reset_in_progress : 1;
> > +               u32 vf13_reset_in_progress : 1;
> > +               u32 vf14_reset_in_progress : 1;
> > +               u32 vf15_reset_in_progress : 1;
> > +               u32 rsvd0 : 16;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_MSIX_MEM_VECTOR_CTRL(x) \
> > +       (0x100000c + (x) * 0x10)
> > +#define DLB2_MSIX_MEM_VECTOR_CTRL_RST 0x1
> > +union dlb2_msix_mem_vector_ctrl {
> > +       struct {
> > +               u32 vec_mask : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_IOSF_FUNC_VF_BAR_DSBL(x) \
> > +       (0x20 + (x) * 0x4)
> > +#define DLB2_IOSF_FUNC_VF_BAR_DSBL_RST 0x0
> > +union dlb2_iosf_func_vf_bar_dsbl {
> > +       struct {
> > +               u32 func_vf_bar_dis : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_VAS 0x1000011c
> > +#define DLB2_SYS_TOTAL_VAS_RST 0x20
> > +union dlb2_sys_total_vas {
> > +       struct {
> > +               u32 total_vas : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_DIR_PORTS 0x10000118
> > +#define DLB2_SYS_TOTAL_DIR_PORTS_RST 0x40
> > +union dlb2_sys_total_dir_ports {
> > +       struct {
> > +               u32 total_dir_ports : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_LDB_PORTS 0x10000114
> > +#define DLB2_SYS_TOTAL_LDB_PORTS_RST 0x40
> > +union dlb2_sys_total_ldb_ports {
> > +       struct {
> > +               u32 total_ldb_ports : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_DIR_QID 0x10000110
> > +#define DLB2_SYS_TOTAL_DIR_QID_RST 0x40
> > +union dlb2_sys_total_dir_qid {
> > +       struct {
> > +               u32 total_dir_qid : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_LDB_QID 0x1000010c
> > +#define DLB2_SYS_TOTAL_LDB_QID_RST 0x20
> > +union dlb2_sys_total_ldb_qid {
> > +       struct {
> > +               u32 total_ldb_qid : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_DIR_CRDS 0x10000108
> > +#define DLB2_SYS_TOTAL_DIR_CRDS_RST 0x1000
> > +union dlb2_sys_total_dir_crds {
> > +       struct {
> > +               u32 total_dir_credits : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_TOTAL_LDB_CRDS 0x10000104
> > +#define DLB2_SYS_TOTAL_LDB_CRDS_RST 0x2000
> > +union dlb2_sys_total_ldb_crds {
> > +       struct {
> > +               u32 total_ldb_credits : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_PF_SYND2 0x10000508
> > +#define DLB2_SYS_ALARM_PF_SYND2_RST 0x0
> > +union dlb2_sys_alarm_pf_synd2 {
> > +       struct {
> > +               u32 lock_id : 16;
> > +               u32 meas : 1;
> > +               u32 debug : 7;
> > +               u32 cq_pop : 1;
> > +               u32 qe_uhl : 1;
> > +               u32 qe_orsp : 1;
> > +               u32 qe_valid : 1;
> > +               u32 cq_int_rearm : 1;
> > +               u32 dsi_error : 1;
> > +               u32 rsvd0 : 2;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_PF_SYND1 0x10000504
> > +#define DLB2_SYS_ALARM_PF_SYND1_RST 0x0
> > +union dlb2_sys_alarm_pf_synd1 {
> > +       struct {
> > +               u32 dsi : 16;
> > +               u32 qid : 8;
> > +               u32 qtype : 2;
> > +               u32 qpri : 3;
> > +               u32 msg_type : 3;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_PF_SYND0 0x10000500
> > +#define DLB2_SYS_ALARM_PF_SYND0_RST 0x0
> > +union dlb2_sys_alarm_pf_synd0 {
> > +       struct {
> > +               u32 syndrome : 8;
> > +               u32 rtype : 2;
> > +               u32 rsvd0 : 3;
> > +               u32 is_ldb : 1;
> > +               u32 cls : 2;
> > +               u32 aid : 6;
> > +               u32 unit : 4;
> > +               u32 source : 4;
> > +               u32 more : 1;
> > +               u32 valid : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_LDB_VPP_V(x) \
> > +       (0x10000f00 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_LDB_VPP_V_RST 0x0
> > +union dlb2_sys_vf_ldb_vpp_v {
> > +       struct {
> > +               u32 vpp_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_LDB_VPP2PP(x) \
> > +       (0x10000f04 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_LDB_VPP2PP_RST 0x0
> > +union dlb2_sys_vf_ldb_vpp2pp {
> > +       struct {
> > +               u32 pp : 6;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_DIR_VPP_V(x) \
> > +       (0x10000f08 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_DIR_VPP_V_RST 0x0
> > +union dlb2_sys_vf_dir_vpp_v {
> > +       struct {
> > +               u32 vpp_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_DIR_VPP2PP(x) \
> > +       (0x10000f0c + (x) * 0x1000)
> > +#define DLB2_SYS_VF_DIR_VPP2PP_RST 0x0
> > +union dlb2_sys_vf_dir_vpp2pp {
> > +       struct {
> > +               u32 pp : 6;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_LDB_VQID_V(x) \
> > +       (0x10000f10 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_LDB_VQID_V_RST 0x0
> > +union dlb2_sys_vf_ldb_vqid_v {
> > +       struct {
> > +               u32 vqid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_LDB_VQID2QID(x) \
> > +       (0x10000f14 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_LDB_VQID2QID_RST 0x0
> > +union dlb2_sys_vf_ldb_vqid2qid {
> > +       struct {
> > +               u32 qid : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_QID2VQID(x) \
> > +       (0x10000f18 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_QID2VQID_RST 0x0
> > +union dlb2_sys_ldb_qid2vqid {
> > +       struct {
> > +               u32 vqid : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_DIR_VQID_V(x) \
> > +       (0x10000f1c + (x) * 0x1000)
> > +#define DLB2_SYS_VF_DIR_VQID_V_RST 0x0
> > +union dlb2_sys_vf_dir_vqid_v {
> > +       struct {
> > +               u32 vqid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_VF_DIR_VQID2QID(x) \
> > +       (0x10000f20 + (x) * 0x1000)
> > +#define DLB2_SYS_VF_DIR_VQID2QID_RST 0x0
> > +union dlb2_sys_vf_dir_vqid2qid {
> > +       struct {
> > +               u32 qid : 6;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_VASQID_V(x) \
> > +       (0x10000f24 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_VASQID_V_RST 0x0
> > +union dlb2_sys_ldb_vasqid_v {
> > +       struct {
> > +               u32 vasqid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_VASQID_V(x) \
> > +       (0x10000f28 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_VASQID_V_RST 0x0
> > +union dlb2_sys_dir_vasqid_v {
> > +       struct {
> > +               u32 vasqid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_VF_SYND2(x) \
> > +       (0x10000f48 + (x) * 0x1000)
> > +#define DLB2_SYS_ALARM_VF_SYND2_RST 0x0
> > +union dlb2_sys_alarm_vf_synd2 {
> > +       struct {
> > +               u32 lock_id : 16;
> > +               u32 debug : 8;
> > +               u32 cq_pop : 1;
> > +               u32 qe_uhl : 1;
> > +               u32 qe_orsp : 1;
> > +               u32 qe_valid : 1;
> > +               u32 isz : 1;
> > +               u32 dsi_error : 1;
> > +               u32 dlbrsvd : 2;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_VF_SYND1(x) \
> > +       (0x10000f44 + (x) * 0x1000)
> > +#define DLB2_SYS_ALARM_VF_SYND1_RST 0x0
> > +union dlb2_sys_alarm_vf_synd1 {
> > +       struct {
> > +               u32 dsi : 16;
> > +               u32 qid : 8;
> > +               u32 qtype : 2;
> > +               u32 qpri : 3;
> > +               u32 msg_type : 3;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_VF_SYND0(x) \
> > +       (0x10000f40 + (x) * 0x1000)
> > +#define DLB2_SYS_ALARM_VF_SYND0_RST 0x0
> > +union dlb2_sys_alarm_vf_synd0 {
> > +       struct {
> > +               u32 syndrome : 8;
> > +               u32 rtype : 2;
> > +               u32 vf_synd0_parity : 1;
> > +               u32 vf_synd1_parity : 1;
> > +               u32 vf_synd2_parity : 1;
> > +               u32 is_ldb : 1;
> > +               u32 cls : 2;
> > +               u32 aid : 6;
> > +               u32 unit : 4;
> > +               u32 source : 4;
> > +               u32 more : 1;
> > +               u32 valid : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_QID_CFG_V(x) \
> > +       (0x10000f58 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_QID_CFG_V_RST 0x0
> > +union dlb2_sys_ldb_qid_cfg_v {
> > +       struct {
> > +               u32 sn_cfg_v : 1;
> > +               u32 fid_cfg_v : 1;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_QID_ITS(x) \
> > +       (0x10000f54 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_QID_ITS_RST 0x0
> > +union dlb2_sys_ldb_qid_its {
> > +       struct {
> > +               u32 qid_its : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_QID_V(x) \
> > +       (0x10000f50 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_QID_V_RST 0x0
> > +union dlb2_sys_ldb_qid_v {
> > +       struct {
> > +               u32 qid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_QID_ITS(x) \
> > +       (0x10000f64 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_QID_ITS_RST 0x0
> > +union dlb2_sys_dir_qid_its {
> > +       struct {
> > +               u32 qid_its : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_QID_V(x) \
> > +       (0x10000f60 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_QID_V_RST 0x0
> > +union dlb2_sys_dir_qid_v {
> > +       struct {
> > +               u32 qid_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_AI_DATA(x) \
> > +       (0x10000fa8 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_AI_DATA_RST 0x0
> > +union dlb2_sys_ldb_cq_ai_data {
> > +       struct {
> > +               u32 cq_ai_data : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_AI_ADDR(x) \
> > +       (0x10000fa4 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_AI_ADDR_RST 0x0
> > +union dlb2_sys_ldb_cq_ai_addr {
> > +       struct {
> > +               u32 rsvd1 : 2;
> > +               u32 cq_ai_addr : 18;
> > +               u32 rsvd0 : 12;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_PASID(x) \
> > +       (0x10000fa0 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_PASID_RST 0x0
> > +union dlb2_sys_ldb_cq_pasid {
> > +       struct {
> > +               u32 pasid : 20;
> > +               u32 exe_req : 1;
> > +               u32 priv_req : 1;
> > +               u32 fmt2 : 1;
> > +               u32 rsvd0 : 9;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_AT(x) \
> > +       (0x10000f9c + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_AT_RST 0x0
> > +union dlb2_sys_ldb_cq_at {
> > +       struct {
> > +               u32 cq_at : 2;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_ISR(x) \
> > +       (0x10000f98 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_ISR_RST 0x0
> > +/* CQ Interrupt Modes */
> > +#define DLB2_CQ_ISR_MODE_DIS  0
> > +#define DLB2_CQ_ISR_MODE_MSI  1
> > +#define DLB2_CQ_ISR_MODE_MSIX 2
> > +#define DLB2_CQ_ISR_MODE_ADI  3
> > +union dlb2_sys_ldb_cq_isr {
> > +       struct {
> > +               u32 vector : 6;
> > +               u32 vf : 4;
> > +               u32 en_code : 2;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ2VF_PF_RO(x) \
> > +       (0x10000f94 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ2VF_PF_RO_RST 0x0
> > +union dlb2_sys_ldb_cq2vf_pf_ro {
> > +       struct {
> > +               u32 vf : 4;
> > +               u32 is_pf : 1;
> > +               u32 ro : 1;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_PP_V(x) \
> > +       (0x10000f90 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_PP_V_RST 0x0
> > +union dlb2_sys_ldb_pp_v {
> > +       struct {
> > +               u32 pp_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_PP2VDEV(x) \
> > +       (0x10000f8c + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_PP2VDEV_RST 0x0
> > +union dlb2_sys_ldb_pp2vdev {
> > +       struct {
> > +               u32 vdev : 4;
> > +               u32 rsvd0 : 28;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_PP2VAS(x) \
> > +       (0x10000f88 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_PP2VAS_RST 0x0
> > +union dlb2_sys_ldb_pp2vas {
> > +       struct {
> > +               u32 vas : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_ADDR_U(x) \
> > +       (0x10000f84 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_ADDR_U_RST 0x0
> > +union dlb2_sys_ldb_cq_addr_u {
> > +       struct {
> > +               u32 addr_u : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_ADDR_L(x) \
> > +       (0x10000f80 + (x) * 0x1000)
> > +#define DLB2_SYS_LDB_CQ_ADDR_L_RST 0x0
> > +union dlb2_sys_ldb_cq_addr_l {
> > +       struct {
> > +               u32 rsvd0 : 6;
> > +               u32 addr_l : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_FMT(x) \
> > +       (0x10000fec + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_FMT_RST 0x0
> > +union dlb2_sys_dir_cq_fmt {
> > +       struct {
> > +               u32 keep_pf_ppid : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_AI_DATA(x) \
> > +       (0x10000fe8 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_AI_DATA_RST 0x0
> > +union dlb2_sys_dir_cq_ai_data {
> > +       struct {
> > +               u32 cq_ai_data : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_AI_ADDR(x) \
> > +       (0x10000fe4 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_AI_ADDR_RST 0x0
> > +union dlb2_sys_dir_cq_ai_addr {
> > +       struct {
> > +               u32 rsvd1 : 2;
> > +               u32 cq_ai_addr : 18;
> > +               u32 rsvd0 : 12;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_PASID(x) \
> > +       (0x10000fe0 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_PASID_RST 0x0
> > +union dlb2_sys_dir_cq_pasid {
> > +       struct {
> > +               u32 pasid : 20;
> > +               u32 exe_req : 1;
> > +               u32 priv_req : 1;
> > +               u32 fmt2 : 1;
> > +               u32 rsvd0 : 9;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_AT(x) \
> > +       (0x10000fdc + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_AT_RST 0x0
> > +union dlb2_sys_dir_cq_at {
> > +       struct {
> > +               u32 cq_at : 2;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_ISR(x) \
> > +       (0x10000fd8 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_ISR_RST 0x0
> > +union dlb2_sys_dir_cq_isr {
> > +       struct {
> > +               u32 vector : 6;
> > +               u32 vf : 4;
> > +               u32 en_code : 2;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ2VF_PF_RO(x) \
> > +       (0x10000fd4 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ2VF_PF_RO_RST 0x0
> > +union dlb2_sys_dir_cq2vf_pf_ro {
> > +       struct {
> > +               u32 vf : 4;
> > +               u32 is_pf : 1;
> > +               u32 ro : 1;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_PP_V(x) \
> > +       (0x10000fd0 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_PP_V_RST 0x0
> > +union dlb2_sys_dir_pp_v {
> > +       struct {
> > +               u32 pp_v : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_PP2VDEV(x) \
> > +       (0x10000fcc + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_PP2VDEV_RST 0x0
> > +union dlb2_sys_dir_pp2vdev {
> > +       struct {
> > +               u32 vdev : 4;
> > +               u32 rsvd0 : 28;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_PP2VAS(x) \
> > +       (0x10000fc8 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_PP2VAS_RST 0x0
> > +union dlb2_sys_dir_pp2vas {
> > +       struct {
> > +               u32 vas : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_ADDR_U(x) \
> > +       (0x10000fc4 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_ADDR_U_RST 0x0
> > +union dlb2_sys_dir_cq_addr_u {
> > +       struct {
> > +               u32 addr_u : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_ADDR_L(x) \
> > +       (0x10000fc0 + (x) * 0x1000)
> > +#define DLB2_SYS_DIR_CQ_ADDR_L_RST 0x0
> > +union dlb2_sys_dir_cq_addr_l {
> > +       struct {
> > +               u32 rsvd0 : 6;
> > +               u32 addr_l : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_INGRESS_ALARM_ENBL 0x10000300
> > +#define DLB2_SYS_INGRESS_ALARM_ENBL_RST 0x0
> > +union dlb2_sys_ingress_alarm_enbl {
> > +       struct {
> > +               u32 illegal_hcw : 1;
> > +               u32 illegal_pp : 1;
> > +               u32 illegal_pasid : 1;
> > +               u32 illegal_qid : 1;
> > +               u32 disabled_qid : 1;
> > +               u32 illegal_ldb_qid_cfg : 1;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_MSIX_ACK 0x10000400
> > +#define DLB2_SYS_MSIX_ACK_RST 0x0
> > +union dlb2_sys_msix_ack {
> > +       struct {
> > +               u32 msix_0_ack : 1;
> > +               u32 msix_1_ack : 1;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_MSIX_PASSTHRU 0x10000404
> > +#define DLB2_SYS_MSIX_PASSTHRU_RST 0x0
> > +union dlb2_sys_msix_passthru {
> > +       struct {
> > +               u32 msix_0_passthru : 1;
> > +               u32 msix_1_passthru : 1;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_MSIX_MODE 0x10000408
> > +#define DLB2_SYS_MSIX_MODE_RST 0x0
> > +/* MSI-X Modes */
> > +#define DLB2_MSIX_MODE_PACKED     0
> > +#define DLB2_MSIX_MODE_COMPRESSED 1
> > +union dlb2_sys_msix_mode {
> > +       struct {
> > +               u32 mode : 1;
> > +               u32 poll_mode : 1;
> > +               u32 poll_mask : 1;
> > +               u32 poll_lock : 1;
> > +               u32 rsvd0 : 28;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS 0x10000440
> > +#define DLB2_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
> > +union dlb2_sys_dir_cq_31_0_occ_int_sts {
> > +       struct {
> > +               u32 cq_0_occ_int : 1;
> > +               u32 cq_1_occ_int : 1;
> > +               u32 cq_2_occ_int : 1;
> > +               u32 cq_3_occ_int : 1;
> > +               u32 cq_4_occ_int : 1;
> > +               u32 cq_5_occ_int : 1;
> > +               u32 cq_6_occ_int : 1;
> > +               u32 cq_7_occ_int : 1;
> > +               u32 cq_8_occ_int : 1;
> > +               u32 cq_9_occ_int : 1;
> > +               u32 cq_10_occ_int : 1;
> > +               u32 cq_11_occ_int : 1;
> > +               u32 cq_12_occ_int : 1;
> > +               u32 cq_13_occ_int : 1;
> > +               u32 cq_14_occ_int : 1;
> > +               u32 cq_15_occ_int : 1;
> > +               u32 cq_16_occ_int : 1;
> > +               u32 cq_17_occ_int : 1;
> > +               u32 cq_18_occ_int : 1;
> > +               u32 cq_19_occ_int : 1;
> > +               u32 cq_20_occ_int : 1;
> > +               u32 cq_21_occ_int : 1;
> > +               u32 cq_22_occ_int : 1;
> > +               u32 cq_23_occ_int : 1;
> > +               u32 cq_24_occ_int : 1;
> > +               u32 cq_25_occ_int : 1;
> > +               u32 cq_26_occ_int : 1;
> > +               u32 cq_27_occ_int : 1;
> > +               u32 cq_28_occ_int : 1;
> > +               u32 cq_29_occ_int : 1;
> > +               u32 cq_30_occ_int : 1;
> > +               u32 cq_31_occ_int : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS 0x10000444
> > +#define DLB2_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
> > +union dlb2_sys_dir_cq_63_32_occ_int_sts {
> > +       struct {
> > +               u32 cq_32_occ_int : 1;
> > +               u32 cq_33_occ_int : 1;
> > +               u32 cq_34_occ_int : 1;
> > +               u32 cq_35_occ_int : 1;
> > +               u32 cq_36_occ_int : 1;
> > +               u32 cq_37_occ_int : 1;
> > +               u32 cq_38_occ_int : 1;
> > +               u32 cq_39_occ_int : 1;
> > +               u32 cq_40_occ_int : 1;
> > +               u32 cq_41_occ_int : 1;
> > +               u32 cq_42_occ_int : 1;
> > +               u32 cq_43_occ_int : 1;
> > +               u32 cq_44_occ_int : 1;
> > +               u32 cq_45_occ_int : 1;
> > +               u32 cq_46_occ_int : 1;
> > +               u32 cq_47_occ_int : 1;
> > +               u32 cq_48_occ_int : 1;
> > +               u32 cq_49_occ_int : 1;
> > +               u32 cq_50_occ_int : 1;
> > +               u32 cq_51_occ_int : 1;
> > +               u32 cq_52_occ_int : 1;
> > +               u32 cq_53_occ_int : 1;
> > +               u32 cq_54_occ_int : 1;
> > +               u32 cq_55_occ_int : 1;
> > +               u32 cq_56_occ_int : 1;
> > +               u32 cq_57_occ_int : 1;
> > +               u32 cq_58_occ_int : 1;
> > +               u32 cq_59_occ_int : 1;
> > +               u32 cq_60_occ_int : 1;
> > +               u32 cq_61_occ_int : 1;
> > +               u32 cq_62_occ_int : 1;
> > +               u32 cq_63_occ_int : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS 0x10000460
> > +#define DLB2_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
> > +union dlb2_sys_ldb_cq_31_0_occ_int_sts {
> > +       struct {
> > +               u32 cq_0_occ_int : 1;
> > +               u32 cq_1_occ_int : 1;
> > +               u32 cq_2_occ_int : 1;
> > +               u32 cq_3_occ_int : 1;
> > +               u32 cq_4_occ_int : 1;
> > +               u32 cq_5_occ_int : 1;
> > +               u32 cq_6_occ_int : 1;
> > +               u32 cq_7_occ_int : 1;
> > +               u32 cq_8_occ_int : 1;
> > +               u32 cq_9_occ_int : 1;
> > +               u32 cq_10_occ_int : 1;
> > +               u32 cq_11_occ_int : 1;
> > +               u32 cq_12_occ_int : 1;
> > +               u32 cq_13_occ_int : 1;
> > +               u32 cq_14_occ_int : 1;
> > +               u32 cq_15_occ_int : 1;
> > +               u32 cq_16_occ_int : 1;
> > +               u32 cq_17_occ_int : 1;
> > +               u32 cq_18_occ_int : 1;
> > +               u32 cq_19_occ_int : 1;
> > +               u32 cq_20_occ_int : 1;
> > +               u32 cq_21_occ_int : 1;
> > +               u32 cq_22_occ_int : 1;
> > +               u32 cq_23_occ_int : 1;
> > +               u32 cq_24_occ_int : 1;
> > +               u32 cq_25_occ_int : 1;
> > +               u32 cq_26_occ_int : 1;
> > +               u32 cq_27_occ_int : 1;
> > +               u32 cq_28_occ_int : 1;
> > +               u32 cq_29_occ_int : 1;
> > +               u32 cq_30_occ_int : 1;
> > +               u32 cq_31_occ_int : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS 0x10000464
> > +#define DLB2_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
> > +union dlb2_sys_ldb_cq_63_32_occ_int_sts {
> > +       struct {
> > +               u32 cq_32_occ_int : 1;
> > +               u32 cq_33_occ_int : 1;
> > +               u32 cq_34_occ_int : 1;
> > +               u32 cq_35_occ_int : 1;
> > +               u32 cq_36_occ_int : 1;
> > +               u32 cq_37_occ_int : 1;
> > +               u32 cq_38_occ_int : 1;
> > +               u32 cq_39_occ_int : 1;
> > +               u32 cq_40_occ_int : 1;
> > +               u32 cq_41_occ_int : 1;
> > +               u32 cq_42_occ_int : 1;
> > +               u32 cq_43_occ_int : 1;
> > +               u32 cq_44_occ_int : 1;
> > +               u32 cq_45_occ_int : 1;
> > +               u32 cq_46_occ_int : 1;
> > +               u32 cq_47_occ_int : 1;
> > +               u32 cq_48_occ_int : 1;
> > +               u32 cq_49_occ_int : 1;
> > +               u32 cq_50_occ_int : 1;
> > +               u32 cq_51_occ_int : 1;
> > +               u32 cq_52_occ_int : 1;
> > +               u32 cq_53_occ_int : 1;
> > +               u32 cq_54_occ_int : 1;
> > +               u32 cq_55_occ_int : 1;
> > +               u32 cq_56_occ_int : 1;
> > +               u32 cq_57_occ_int : 1;
> > +               u32 cq_58_occ_int : 1;
> > +               u32 cq_59_occ_int : 1;
> > +               u32 cq_60_occ_int : 1;
> > +               u32 cq_61_occ_int : 1;
> > +               u32 cq_62_occ_int : 1;
> > +               u32 cq_63_occ_int : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_DIR_CQ_OPT_CLR 0x100004c0
> > +#define DLB2_SYS_DIR_CQ_OPT_CLR_RST 0x0
> > +union dlb2_sys_dir_cq_opt_clr {
> > +       struct {
> > +               u32 cq : 6;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_SYS_ALARM_HW_SYND 0x1000050c
> > +#define DLB2_SYS_ALARM_HW_SYND_RST 0x0
> > +union dlb2_sys_alarm_hw_synd {
> > +       struct {
> > +               u32 syndrome : 8;
> > +               u32 rtype : 2;
> > +               u32 alarm : 1;
> > +               u32 cwd : 1;
> > +               u32 vf_pf_mb : 1;
> > +               u32 rsvd0 : 1;
> > +               u32 cls : 2;
> > +               u32 aid : 6;
> > +               u32 unit : 4;
> > +               u32 source : 4;
> > +               u32 more : 1;
> > +               u32 valid : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_AQED_PIPE_QID_FID_LIM(x) \
> > +       (0x20000000 + (x) * 0x1000)
> > +#define DLB2_AQED_PIPE_QID_FID_LIM_RST 0x7ff
> > +union dlb2_aqed_pipe_qid_fid_lim {
> > +       struct {
> > +               u32 qid_fid_limit : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_AQED_PIPE_QID_HID_WIDTH(x) \
> > +       (0x20080000 + (x) * 0x1000)
> > +#define DLB2_AQED_PIPE_QID_HID_WIDTH_RST 0x0
> > +union dlb2_aqed_pipe_qid_hid_width {
> > +       struct {
> > +               u32 compress_code : 3;
> > +               u32 rsvd0 : 29;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0 0x24000004
> > +#define DLB2_AQED_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfefcfaf8
> > +union dlb2_aqed_pipe_cfg_arb_weights_tqpri_atm_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_ATM_QID2CQIDIX_00(x) \
> > +       (0x30080000 + (x) * 0x1000)
> > +#define DLB2_ATM_QID2CQIDIX_00_RST 0x0
> > +#define DLB2_ATM_QID2CQIDIX(x, y) \
> > +       (DLB2_ATM_QID2CQIDIX_00(x) + 0x80000 * (y))
> > +#define DLB2_ATM_QID2CQIDIX_NUM 16
> > +union dlb2_atm_qid2cqidix_00 {
> > +       struct {
> > +               u32 cq_p0 : 8;
> > +               u32 cq_p1 : 8;
> > +               u32 cq_p2 : 8;
> > +               u32 cq_p3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN 0x34000004
> > +#define DLB2_ATM_CFG_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
> > +union dlb2_atm_cfg_arb_weights_rdy_bin {
> > +       struct {
> > +               u32 bin0 : 8;
> > +               u32 bin1 : 8;
> > +               u32 bin2 : 8;
> > +               u32 bin3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN 0x34000008
> > +#define DLB2_ATM_CFG_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
> > +union dlb2_atm_cfg_arb_weights_sched_bin {
> > +       struct {
> > +               u32 bin0 : 8;
> > +               u32 bin1 : 8;
> > +               u32 bin2 : 8;
> > +               u32 bin3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_VAS_CRD(x) \
> > +       (0x40000000 + (x) * 0x1000)
> > +#define DLB2_CHP_CFG_DIR_VAS_CRD_RST 0x0
> > +union dlb2_chp_cfg_dir_vas_crd {
> > +       struct {
> > +               u32 count : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_VAS_CRD(x) \
> > +       (0x40080000 + (x) * 0x1000)
> > +#define DLB2_CHP_CFG_LDB_VAS_CRD_RST 0x0
> > +union dlb2_chp_cfg_ldb_vas_crd {
> > +       struct {
> > +               u32 count : 15;
> > +               u32 rsvd0 : 17;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_ORD_QID_SN(x) \
> > +       (0x40100000 + (x) * 0x1000)
> > +#define DLB2_CHP_ORD_QID_SN_RST 0x0
> > +union dlb2_chp_ord_qid_sn {
> > +       struct {
> > +               u32 sn : 10;
> > +               u32 rsvd0 : 22;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_ORD_QID_SN_MAP(x) \
> > +       (0x40180000 + (x) * 0x1000)
> > +#define DLB2_CHP_ORD_QID_SN_MAP_RST 0x0
> > +union dlb2_chp_ord_qid_sn_map {
> > +       struct {
> > +               u32 mode : 3;
> > +               u32 slot : 4;
> > +               u32 rsvz0 : 1;
> > +               u32 grp : 1;
> > +               u32 rsvz1 : 1;
> > +               u32 rsvd0 : 22;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_SN_CHK_ENBL(x) \
> > +       (0x40200000 + (x) * 0x1000)
> > +#define DLB2_CHP_SN_CHK_ENBL_RST 0x0
> > +union dlb2_chp_sn_chk_enbl {
> > +       struct {
> > +               u32 en : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_DEPTH(x) \
> > +       (0x40280000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_DEPTH_RST 0x0
> > +union dlb2_chp_dir_cq_depth {
> > +       struct {
> > +               u32 depth : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
> > +       (0x40300000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
> > +union dlb2_chp_dir_cq_int_depth_thrsh {
> > +       struct {
> > +               u32 depth_threshold : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_INT_ENB(x) \
> > +       (0x40380000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_INT_ENB_RST 0x0
> > +union dlb2_chp_dir_cq_int_enb {
> > +       struct {
> > +               u32 en_tim : 1;
> > +               u32 en_depth : 1;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_TMR_THRSH(x) \
> > +       (0x40480000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_TMR_THRSH_RST 0x1
> > +union dlb2_chp_dir_cq_tmr_thrsh {
> > +       struct {
> > +               u32 thrsh_0 : 1;
> > +               u32 thrsh_13_1 : 13;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
> > +       (0x40500000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
> > +union dlb2_chp_dir_cq_tkn_depth_sel {
> > +       struct {
> > +               u32 token_depth_select : 4;
> > +               u32 rsvd0 : 28;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_WD_ENB(x) \
> > +       (0x40580000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_WD_ENB_RST 0x0
> > +union dlb2_chp_dir_cq_wd_enb {
> > +       struct {
> > +               u32 wd_enable : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_WPTR(x) \
> > +       (0x40600000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ_WPTR_RST 0x0
> > +union dlb2_chp_dir_cq_wptr {
> > +       struct {
> > +               u32 write_pointer : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ2VAS(x) \
> > +       (0x40680000 + (x) * 0x1000)
> > +#define DLB2_CHP_DIR_CQ2VAS_RST 0x0
> > +union dlb2_chp_dir_cq2vas {
> > +       struct {
> > +               u32 cq2vas : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_HIST_LIST_BASE(x) \
> > +       (0x40700000 + (x) * 0x1000)
> > +#define DLB2_CHP_HIST_LIST_BASE_RST 0x0
> > +union dlb2_chp_hist_list_base {
> > +       struct {
> > +               u32 base : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_HIST_LIST_LIM(x) \
> > +       (0x40780000 + (x) * 0x1000)
> > +#define DLB2_CHP_HIST_LIST_LIM_RST 0x0
> > +union dlb2_chp_hist_list_lim {
> > +       struct {
> > +               u32 limit : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_HIST_LIST_POP_PTR(x) \
> > +       (0x40800000 + (x) * 0x1000)
> > +#define DLB2_CHP_HIST_LIST_POP_PTR_RST 0x0
> > +union dlb2_chp_hist_list_pop_ptr {
> > +       struct {
> > +               u32 pop_ptr : 13;
> > +               u32 generation : 1;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_HIST_LIST_PUSH_PTR(x) \
> > +       (0x40880000 + (x) * 0x1000)
> > +#define DLB2_CHP_HIST_LIST_PUSH_PTR_RST 0x0
> > +union dlb2_chp_hist_list_push_ptr {
> > +       struct {
> > +               u32 push_ptr : 13;
> > +               u32 generation : 1;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_DEPTH(x) \
> > +       (0x40900000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_DEPTH_RST 0x0
> > +union dlb2_chp_ldb_cq_depth {
> > +       struct {
> > +               u32 depth : 11;
> > +               u32 rsvd0 : 21;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
> > +       (0x40980000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
> > +union dlb2_chp_ldb_cq_int_depth_thrsh {
> > +       struct {
> > +               u32 depth_threshold : 11;
> > +               u32 rsvd0 : 21;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_INT_ENB(x) \
> > +       (0x40a00000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_INT_ENB_RST 0x0
> > +union dlb2_chp_ldb_cq_int_enb {
> > +       struct {
> > +               u32 en_tim : 1;
> > +               u32 en_depth : 1;
> > +               u32 rsvd0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_TMR_THRSH(x) \
> > +       (0x40b00000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_TMR_THRSH_RST 0x1
> > +union dlb2_chp_ldb_cq_tmr_thrsh {
> > +       struct {
> > +               u32 thrsh_0 : 1;
> > +               u32 thrsh_13_1 : 13;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
> > +       (0x40b80000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
> > +union dlb2_chp_ldb_cq_tkn_depth_sel {
> > +       struct {
> > +               u32 token_depth_select : 4;
> > +               u32 rsvd0 : 28;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_WD_ENB(x) \
> > +       (0x40c00000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_WD_ENB_RST 0x0
> > +union dlb2_chp_ldb_cq_wd_enb {
> > +       struct {
> > +               u32 wd_enable : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_WPTR(x) \
> > +       (0x40c80000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ_WPTR_RST 0x0
> > +union dlb2_chp_ldb_cq_wptr {
> > +       struct {
> > +               u32 write_pointer : 11;
> > +               u32 rsvd0 : 21;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ2VAS(x) \
> > +       (0x40d00000 + (x) * 0x1000)
> > +#define DLB2_CHP_LDB_CQ2VAS_RST 0x0
> > +union dlb2_chp_ldb_cq2vas {
> > +       struct {
> > +               u32 cq2vas : 5;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_CHP_CSR_CTRL 0x44000008
> > +#define DLB2_CHP_CFG_CHP_CSR_CTRL_RST 0x180002
> > +union dlb2_chp_cfg_chp_csr_ctrl {
> > +       struct {
> > +               u32 int_cor_alarm_dis : 1;
> > +               u32 int_cor_synd_dis : 1;
> > +               u32 int_uncr_alarm_dis : 1;
> > +               u32 int_unc_synd_dis : 1;
> > +               u32 int_inf0_alarm_dis : 1;
> > +               u32 int_inf0_synd_dis : 1;
> > +               u32 int_inf1_alarm_dis : 1;
> > +               u32 int_inf1_synd_dis : 1;
> > +               u32 int_inf2_alarm_dis : 1;
> > +               u32 int_inf2_synd_dis : 1;
> > +               u32 int_inf3_alarm_dis : 1;
> > +               u32 int_inf3_synd_dis : 1;
> > +               u32 int_inf4_alarm_dis : 1;
> > +               u32 int_inf4_synd_dis : 1;
> > +               u32 int_inf5_alarm_dis : 1;
> > +               u32 int_inf5_synd_dis : 1;
> > +               u32 dlb_cor_alarm_enable : 1;
> > +               u32 cfg_64bytes_qe_ldb_cq_mode : 1;
> > +               u32 cfg_64bytes_qe_dir_cq_mode : 1;
> > +               u32 pad_write_ldb : 1;
> > +               u32 pad_write_dir : 1;
> > +               u32 pad_first_write_ldb : 1;
> > +               u32 pad_first_write_dir : 1;
> > +               u32 rsvz0 : 9;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_INTR_ARMED0 0x4400005c
> > +#define DLB2_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
> > +union dlb2_chp_dir_cq_intr_armed0 {
> > +       struct {
> > +               u32 armed : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_DIR_CQ_INTR_ARMED1 0x44000060
> > +#define DLB2_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
> > +union dlb2_chp_dir_cq_intr_armed1 {
> > +       struct {
> > +               u32 armed : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL 0x44000084
> > +#define DLB2_CHP_CFG_DIR_CQ_TIMER_CTL_RST 0x0
> > +union dlb2_chp_cfg_dir_cq_timer_ctl {
> > +       struct {
> > +               u32 sample_interval : 8;
> > +               u32 enb : 1;
> > +               u32 rsvz0 : 23;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WDTO_0 0x44000088
> > +#define DLB2_CHP_CFG_DIR_WDTO_0_RST 0x0
> > +union dlb2_chp_cfg_dir_wdto_0 {
> > +       struct {
> > +               u32 wdto : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WDTO_1 0x4400008c
> > +#define DLB2_CHP_CFG_DIR_WDTO_1_RST 0x0
> > +union dlb2_chp_cfg_dir_wdto_1 {
> > +       struct {
> > +               u32 wdto : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WD_DISABLE0 0x44000098
> > +#define DLB2_CHP_CFG_DIR_WD_DISABLE0_RST 0xffffffff
> > +union dlb2_chp_cfg_dir_wd_disable0 {
> > +       struct {
> > +               u32 wd_disable : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WD_DISABLE1 0x4400009c
> > +#define DLB2_CHP_CFG_DIR_WD_DISABLE1_RST 0xffffffff
> > +union dlb2_chp_cfg_dir_wd_disable1 {
> > +       struct {
> > +               u32 wd_disable : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL 0x440000a0
> > +#define DLB2_CHP_CFG_DIR_WD_ENB_INTERVAL_RST 0x0
> > +union dlb2_chp_cfg_dir_wd_enb_interval {
> > +       struct {
> > +               u32 sample_interval : 28;
> > +               u32 enb : 1;
> > +               u32 rsvz0 : 3;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_DIR_WD_THRESHOLD 0x440000ac
> > +#define DLB2_CHP_CFG_DIR_WD_THRESHOLD_RST 0x0
> > +union dlb2_chp_cfg_dir_wd_threshold {
> > +       struct {
> > +               u32 wd_threshold : 8;
> > +               u32 rsvz0 : 24;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_INTR_ARMED0 0x440000b0
> > +#define DLB2_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
> > +union dlb2_chp_ldb_cq_intr_armed0 {
> > +       struct {
> > +               u32 armed : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_LDB_CQ_INTR_ARMED1 0x440000b4
> > +#define DLB2_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
> > +union dlb2_chp_ldb_cq_intr_armed1 {
> > +       struct {
> > +               u32 armed : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL 0x440000d8
> > +#define DLB2_CHP_CFG_LDB_CQ_TIMER_CTL_RST 0x0
> > +union dlb2_chp_cfg_ldb_cq_timer_ctl {
> > +       struct {
> > +               u32 sample_interval : 8;
> > +               u32 enb : 1;
> > +               u32 rsvz0 : 23;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WDTO_0 0x440000dc
> > +#define DLB2_CHP_CFG_LDB_WDTO_0_RST 0x0
> > +union dlb2_chp_cfg_ldb_wdto_0 {
> > +       struct {
> > +               u32 wdto : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WDTO_1 0x440000e0
> > +#define DLB2_CHP_CFG_LDB_WDTO_1_RST 0x0
> > +union dlb2_chp_cfg_ldb_wdto_1 {
> > +       struct {
> > +               u32 wdto : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WD_DISABLE0 0x440000ec
> > +#define DLB2_CHP_CFG_LDB_WD_DISABLE0_RST 0xffffffff
> > +union dlb2_chp_cfg_ldb_wd_disable0 {
> > +       struct {
> > +               u32 wd_disable : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WD_DISABLE1 0x440000f0
> > +#define DLB2_CHP_CFG_LDB_WD_DISABLE1_RST 0xffffffff
> > +union dlb2_chp_cfg_ldb_wd_disable1 {
> > +       struct {
> > +               u32 wd_disable : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL 0x440000f4
> > +#define DLB2_CHP_CFG_LDB_WD_ENB_INTERVAL_RST 0x0
> > +union dlb2_chp_cfg_ldb_wd_enb_interval {
> > +       struct {
> > +               u32 sample_interval : 28;
> > +               u32 enb : 1;
> > +               u32 rsvz0 : 3;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CFG_LDB_WD_THRESHOLD 0x44000100
> > +#define DLB2_CHP_CFG_LDB_WD_THRESHOLD_RST 0x0
> > +union dlb2_chp_cfg_ldb_wd_threshold {
> > +       struct {
> > +               u32 wd_threshold : 8;
> > +               u32 rsvz0 : 24;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CHP_CTRL_DIAG_02 0x4c000028
> > +#define DLB2_CHP_CTRL_DIAG_02_RST 0x1555
> > +union dlb2_chp_ctrl_diag_02 {
> > +       struct {
> > +               u32 egress_credit_status_empty : 1;
> > +               u32 egress_credit_status_afull : 1;
> > +               u32 chp_outbound_hcw_pipe_credit_status_empty : 1;
> > +               u32 chp_outbound_hcw_pipe_credit_status_afull : 1;
> > +               u32 chp_lsp_ap_cmp_pipe_credit_status_empty : 1;
> > +               u32 chp_lsp_ap_cmp_pipe_credit_status_afull : 1;
> > +               u32 chp_lsp_tok_pipe_credit_status_empty : 1;
> > +               u32 chp_lsp_tok_pipe_credit_status_afull : 1;
> > +               u32 chp_rop_pipe_credit_status_empty : 1;
> > +               u32 chp_rop_pipe_credit_status_afull : 1;
> > +               u32 qed_to_cq_pipe_credit_status_empty : 1;
> > +               u32 qed_to_cq_pipe_credit_status_afull : 1;
> > +               u32 egress_lsp_token_credit_status_empty : 1;
> > +               u32 egress_lsp_token_credit_status_afull : 1;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0 0x54000000
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfefcfaf8
> > +union dlb2_dp_cfg_arb_weights_tqpri_dir_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1 0x54000004
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_DIR_1_RST 0x0
> > +union dlb2_dp_cfg_arb_weights_tqpri_dir_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x54000008
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
> > +union dlb2_dp_cfg_arb_weights_tqpri_replay_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x5400000c
> > +#define DLB2_DP_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
> > +union dlb2_dp_cfg_arb_weights_tqpri_replay_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_DP_DIR_CSR_CTRL 0x54000010
> > +#define DLB2_DP_DIR_CSR_CTRL_RST 0x0
> > +union dlb2_dp_dir_csr_ctrl {
> > +       struct {
> > +               u32 int_cor_alarm_dis : 1;
> > +               u32 int_cor_synd_dis : 1;
> > +               u32 int_uncr_alarm_dis : 1;
> > +               u32 int_unc_synd_dis : 1;
> > +               u32 int_inf0_alarm_dis : 1;
> > +               u32 int_inf0_synd_dis : 1;
> > +               u32 int_inf1_alarm_dis : 1;
> > +               u32 int_inf1_synd_dis : 1;
> > +               u32 int_inf2_alarm_dis : 1;
> > +               u32 int_inf2_synd_dis : 1;
> > +               u32 int_inf3_alarm_dis : 1;
> > +               u32 int_inf3_synd_dis : 1;
> > +               u32 int_inf4_alarm_dis : 1;
> > +               u32 int_inf4_synd_dis : 1;
> > +               u32 int_inf5_alarm_dis : 1;
> > +               u32 int_inf5_synd_dis : 1;
> > +               u32 rsvz0 : 16;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0 0x84000000
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfefcfaf8
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1 0x84000004
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0x0
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_atq_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0 0x84000008
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfefcfaf8
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1 0x8400000c
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_NALB_1_RST 0x0
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_nalb_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0 0x84000010
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfefcfaf8
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_0 {
> > +       struct {
> > +               u32 pri0 : 8;
> > +               u32 pri1 : 8;
> > +               u32 pri2 : 8;
> > +               u32 pri3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1 0x84000014
> > +#define DLB2_NALB_PIPE_CFG_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0x0
> > +union dlb2_nalb_pipe_cfg_arb_weights_tqpri_replay_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_RO_PIPE_GRP_0_SLT_SHFT(x) \
> > +       (0x96000000 + (x) * 0x4)
> > +#define DLB2_RO_PIPE_GRP_0_SLT_SHFT_RST 0x0
> > +union dlb2_ro_pipe_grp_0_slt_shft {
> > +       struct {
> > +               u32 change : 10;
> > +               u32 rsvd0 : 22;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_RO_PIPE_GRP_1_SLT_SHFT(x) \
> > +       (0x96010000 + (x) * 0x4)
> > +#define DLB2_RO_PIPE_GRP_1_SLT_SHFT_RST 0x0
> > +union dlb2_ro_pipe_grp_1_slt_shft {
> > +       struct {
> > +               u32 change : 10;
> > +               u32 rsvd0 : 22;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_RO_PIPE_GRP_SN_MODE 0x94000000
> > +#define DLB2_RO_PIPE_GRP_SN_MODE_RST 0x0
> > +union dlb2_ro_pipe_grp_sn_mode {
> > +       struct {
> > +               u32 sn_mode_0 : 3;
> > +               u32 rszv0 : 5;
> > +               u32 sn_mode_1 : 3;
> > +               u32 rszv1 : 21;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0 0x9c000000
> > +#define DLB2_RO_PIPE_CFG_CTRL_GENERAL_0_RST 0x0
> > +union dlb2_ro_pipe_cfg_ctrl_general_0 {
> > +       struct {
> > +               u32 unit_single_step_mode : 1;
> > +               u32 rr_en : 1;
> > +               u32 rszv0 : 30;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ2PRIOV(x) \
> > +       (0xa0000000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ2PRIOV_RST 0x0
> > +union dlb2_lsp_cq2priov {
> > +       struct {
> > +               u32 prio : 24;
> > +               u32 v : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ2QID0(x) \
> > +       (0xa0080000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ2QID0_RST 0x0
> > +union dlb2_lsp_cq2qid0 {
> > +       struct {
> > +               u32 qid_p0 : 7;
> > +               u32 rsvd3 : 1;
> > +               u32 qid_p1 : 7;
> > +               u32 rsvd2 : 1;
> > +               u32 qid_p2 : 7;
> > +               u32 rsvd1 : 1;
> > +               u32 qid_p3 : 7;
> > +               u32 rsvd0 : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ2QID1(x) \
> > +       (0xa0100000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ2QID1_RST 0x0
> > +union dlb2_lsp_cq2qid1 {
> > +       struct {
> > +               u32 qid_p4 : 7;
> > +               u32 rsvd3 : 1;
> > +               u32 qid_p5 : 7;
> > +               u32 rsvd2 : 1;
> > +               u32 qid_p6 : 7;
> > +               u32 rsvd1 : 1;
> > +               u32 qid_p7 : 7;
> > +               u32 rsvd0 : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_DIR_DSBL(x) \
> > +       (0xa0180000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_DIR_DSBL_RST 0x1
> > +union dlb2_lsp_cq_dir_dsbl {
> > +       struct {
> > +               u32 disabled : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_DIR_TKN_CNT(x) \
> > +       (0xa0200000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_DIR_TKN_CNT_RST 0x0
> > +union dlb2_lsp_cq_dir_tkn_cnt {
> > +       struct {
> > +               u32 count : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
> > +       (0xa0280000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
> > +union dlb2_lsp_cq_dir_tkn_depth_sel_dsi {
> > +       struct {
> > +               u32 token_depth_select : 4;
> > +               u32 disable_wb_opt : 1;
> > +               u32 ignore_depth : 1;
> > +               u32 rsvd0 : 26;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
> > +       (0xa0300000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
> > +union dlb2_lsp_cq_dir_tot_sch_cntl {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
> > +       (0xa0380000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
> > +union dlb2_lsp_cq_dir_tot_sch_cnth {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_DSBL(x) \
> > +       (0xa0400000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_DSBL_RST 0x1
> > +union dlb2_lsp_cq_ldb_dsbl {
> > +       struct {
> > +               u32 disabled : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_INFL_CNT(x) \
> > +       (0xa0480000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_INFL_CNT_RST 0x0
> > +union dlb2_lsp_cq_ldb_infl_cnt {
> > +       struct {
> > +               u32 count : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_INFL_LIM(x) \
> > +       (0xa0500000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_INFL_LIM_RST 0x0
> > +union dlb2_lsp_cq_ldb_infl_lim {
> > +       struct {
> > +               u32 limit : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_TKN_CNT(x) \
> > +       (0xa0580000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_TKN_CNT_RST 0x0
> > +union dlb2_lsp_cq_ldb_tkn_cnt {
> > +       struct {
> > +               u32 token_count : 11;
> > +               u32 rsvd0 : 21;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
> > +       (0xa0600000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
> > +union dlb2_lsp_cq_ldb_tkn_depth_sel {
> > +       struct {
> > +               u32 token_depth_select : 4;
> > +               u32 ignore_depth : 1;
> > +               u32 rsvd0 : 27;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
> > +       (0xa0680000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
> > +union dlb2_lsp_cq_ldb_tot_sch_cntl {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
> > +       (0xa0700000 + (x) * 0x1000)
> > +#define DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
> > +union dlb2_lsp_cq_ldb_tot_sch_cnth {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_DIR_MAX_DEPTH(x) \
> > +       (0xa0780000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_DIR_MAX_DEPTH_RST 0x0
> > +union dlb2_lsp_qid_dir_max_depth {
> > +       struct {
> > +               u32 depth : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(x) \
> > +       (0xa0800000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST 0x0
> > +union dlb2_lsp_qid_dir_tot_enq_cntl {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(x) \
> > +       (0xa0880000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST 0x0
> > +union dlb2_lsp_qid_dir_tot_enq_cnth {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_DIR_ENQUEUE_CNT(x) \
> > +       (0xa0900000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
> > +union dlb2_lsp_qid_dir_enqueue_cnt {
> > +       struct {
> > +               u32 count : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_DIR_DEPTH_THRSH(x) \
> > +       (0xa0980000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_DIR_DEPTH_THRSH_RST 0x0
> > +union dlb2_lsp_qid_dir_depth_thrsh {
> > +       struct {
> > +               u32 thresh : 13;
> > +               u32 rsvd0 : 19;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_AQED_ACTIVE_CNT(x) \
> > +       (0xa0a00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
> > +union dlb2_lsp_qid_aqed_active_cnt {
> > +       struct {
> > +               u32 count : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_AQED_ACTIVE_LIM(x) \
> > +       (0xa0a80000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
> > +union dlb2_lsp_qid_aqed_active_lim {
> > +       struct {
> > +               u32 limit : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(x) \
> > +       (0xa0b00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST 0x0
> > +union dlb2_lsp_qid_atm_tot_enq_cntl {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(x) \
> > +       (0xa0b80000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST 0x0
> > +union dlb2_lsp_qid_atm_tot_enq_cnth {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT(x) \
> > +       (0xa0c00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
> > +union dlb2_lsp_qid_atq_enqueue_cnt {
> > +       struct {
> > +               u32 count : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_LDB_ENQUEUE_CNT(x) \
> > +       (0xa0c80000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
> > +union dlb2_lsp_qid_ldb_enqueue_cnt {
> > +       struct {
> > +               u32 count : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_LDB_INFL_CNT(x) \
> > +       (0xa0d00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_LDB_INFL_CNT_RST 0x0
> > +union dlb2_lsp_qid_ldb_infl_cnt {
> > +       struct {
> > +               u32 count : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_LDB_INFL_LIM(x) \
> > +       (0xa0d80000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_LDB_INFL_LIM_RST 0x0
> > +union dlb2_lsp_qid_ldb_infl_lim {
> > +       struct {
> > +               u32 limit : 12;
> > +               u32 rsvd0 : 20;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID2CQIDIX_00(x) \
> > +       (0xa0e00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID2CQIDIX_00_RST 0x0
> > +#define DLB2_LSP_QID2CQIDIX(x, y) \
> > +       (DLB2_LSP_QID2CQIDIX_00(x) + 0x80000 * (y))
> > +#define DLB2_LSP_QID2CQIDIX_NUM 16
> > +union dlb2_lsp_qid2cqidix_00 {
> > +       struct {
> > +               u32 cq_p0 : 8;
> > +               u32 cq_p1 : 8;
> > +               u32 cq_p2 : 8;
> > +               u32 cq_p3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID2CQIDIX2_00(x) \
> > +       (0xa1600000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID2CQIDIX2_00_RST 0x0
> > +#define DLB2_LSP_QID2CQIDIX2(x, y) \
> > +       (DLB2_LSP_QID2CQIDIX2_00(x) + 0x80000 * (y))
> > +#define DLB2_LSP_QID2CQIDIX2_NUM 16
> > +union dlb2_lsp_qid2cqidix2_00 {
> > +       struct {
> > +               u32 cq_p0 : 8;
> > +               u32 cq_p1 : 8;
> > +               u32 cq_p2 : 8;
> > +               u32 cq_p3 : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_LDB_REPLAY_CNT(x) \
> > +       (0xa1e00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_LDB_REPLAY_CNT_RST 0x0
> > +union dlb2_lsp_qid_ldb_replay_cnt {
> > +       struct {
> > +               u32 count : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_NALDB_MAX_DEPTH(x) \
> > +       (0xa1f00000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_NALDB_MAX_DEPTH_RST 0x0
> > +union dlb2_lsp_qid_naldb_max_depth {
> > +       struct {
> > +               u32 depth : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(x) \
> > +       (0xa1f80000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST 0x0
> > +union dlb2_lsp_qid_naldb_tot_enq_cntl {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(x) \
> > +       (0xa2000000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST 0x0
> > +union dlb2_lsp_qid_naldb_tot_enq_cnth {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_ATM_DEPTH_THRSH(x) \
> > +       (0xa2080000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_ATM_DEPTH_THRSH_RST 0x0
> > +union dlb2_lsp_qid_atm_depth_thrsh {
> > +       struct {
> > +               u32 thresh : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_NALDB_DEPTH_THRSH(x) \
> > +       (0xa2100000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST 0x0
> > +union dlb2_lsp_qid_naldb_depth_thrsh {
> > +       struct {
> > +               u32 thresh : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_QID_ATM_ACTIVE(x) \
> > +       (0xa2180000 + (x) * 0x1000)
> > +#define DLB2_LSP_QID_ATM_ACTIVE_RST 0x0
> > +union dlb2_lsp_qid_atm_active {
> > +       struct {
> > +               u32 count : 14;
> > +               u32 rsvd0 : 18;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0xa4000008
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
> > +union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_0 {
> > +       struct {
> > +               u32 pri0_weight : 8;
> > +               u32 pri1_weight : 8;
> > +               u32 pri2_weight : 8;
> > +               u32 pri3_weight : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0xa400000c
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
> > +union dlb2_lsp_cfg_arb_weight_atm_nalb_qid_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0xa4000014
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
> > +union dlb2_lsp_cfg_arb_weight_ldb_qid_0 {
> > +       struct {
> > +               u32 pri0_weight : 8;
> > +               u32 pri1_weight : 8;
> > +               u32 pri2_weight : 8;
> > +               u32 pri3_weight : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0xa4000018
> > +#define DLB2_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
> > +union dlb2_lsp_cfg_arb_weight_ldb_qid_1 {
> > +       struct {
> > +               u32 rsvz0 : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_LDB_SCHED_CTRL 0xa400002c
> > +#define DLB2_LSP_LDB_SCHED_CTRL_RST 0x0
> > +union dlb2_lsp_ldb_sched_ctrl {
> > +       struct {
> > +               u32 cq : 8;
> > +               u32 qidix : 3;
> > +               u32 value : 1;
> > +               u32 nalb_haswork_v : 1;
> > +               u32 rlist_haswork_v : 1;
> > +               u32 slist_haswork_v : 1;
> > +               u32 inflight_ok_v : 1;
> > +               u32 aqed_nfull_v : 1;
> > +               u32 rsvz0 : 15;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_DIR_SCH_CNT_L 0xa4000034
> > +#define DLB2_LSP_DIR_SCH_CNT_L_RST 0x0
> > +union dlb2_lsp_dir_sch_cnt_l {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_DIR_SCH_CNT_H 0xa4000038
> > +#define DLB2_LSP_DIR_SCH_CNT_H_RST 0x0
> > +union dlb2_lsp_dir_sch_cnt_h {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_LDB_SCH_CNT_L 0xa400003c
> > +#define DLB2_LSP_LDB_SCH_CNT_L_RST 0x0
> > +union dlb2_lsp_ldb_sch_cnt_l {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_LDB_SCH_CNT_H 0xa4000040
> > +#define DLB2_LSP_LDB_SCH_CNT_H_RST 0x0
> > +union dlb2_lsp_ldb_sch_cnt_h {
> > +       struct {
> > +               u32 count : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_SHDW_CTRL 0xa4000070
> > +#define DLB2_LSP_CFG_SHDW_CTRL_RST 0x0
> > +union dlb2_lsp_cfg_shdw_ctrl {
> > +       struct {
> > +               u32 transfer : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_SHDW_RANGE_COS(x) \
> > +       (0xa4000074 + (x) * 4)
> > +#define DLB2_LSP_CFG_SHDW_RANGE_COS_RST 0x40
> > +union dlb2_lsp_cfg_shdw_range_cos {
> > +       struct {
> > +               u32 bw_range : 9;
> > +               u32 rsvz0 : 22;
> > +               u32 no_extra_credit : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_LSP_CFG_CTRL_GENERAL_0 0xac000000
> > +#define DLB2_LSP_CFG_CTRL_GENERAL_0_RST 0x0
> > +union dlb2_lsp_cfg_ctrl_general_0 {
> > +       struct {
> > +               u32 disab_atq_empty_arb : 1;
> > +               u32 inc_tok_unit_idle : 1;
> > +               u32 disab_rlist_pri : 1;
> > +               u32 inc_cmp_unit_idle : 1;
> > +               u32 rsvz0 : 2;
> > +               u32 dir_single_op : 1;
> > +               u32 dir_half_bw : 1;
> > +               u32 dir_single_out : 1;
> > +               u32 dir_disab_multi : 1;
> > +               u32 atq_single_op : 1;
> > +               u32 atq_half_bw : 1;
> > +               u32 atq_single_out : 1;
> > +               u32 atq_disab_multi : 1;
> > +               u32 dirrpl_single_op : 1;
> > +               u32 dirrpl_half_bw : 1;
> > +               u32 dirrpl_single_out : 1;
> > +               u32 lbrpl_single_op : 1;
> > +               u32 lbrpl_half_bw : 1;
> > +               u32 lbrpl_single_out : 1;
> > +               u32 ldb_single_op : 1;
> > +               u32 ldb_half_bw : 1;
> > +               u32 ldb_disab_multi : 1;
> > +               u32 atm_single_sch : 1;
> > +               u32 atm_single_cmp : 1;
> > +               u32 ldb_ce_tog_arb : 1;
> > +               u32 rsvz1 : 1;
> > +               u32 smon0_valid_sel : 2;
> > +               u32 smon0_value_sel : 1;
> > +               u32 smon0_compare_sel : 2;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CFG_MSTR_DIAG_RESET_STS 0xb4000000
> > +#define DLB2_CFG_MSTR_DIAG_RESET_STS_RST 0x80000bff
> > +union dlb2_cfg_mstr_diag_reset_sts {
> > +       struct {
> > +               u32 chp_pf_reset_done : 1;
> > +               u32 rop_pf_reset_done : 1;
> > +               u32 lsp_pf_reset_done : 1;
> > +               u32 nalb_pf_reset_done : 1;
> > +               u32 ap_pf_reset_done : 1;
> > +               u32 dp_pf_reset_done : 1;
> > +               u32 qed_pf_reset_done : 1;
> > +               u32 dqed_pf_reset_done : 1;
> > +               u32 aqed_pf_reset_done : 1;
> > +               u32 sys_pf_reset_done : 1;
> > +               u32 pf_reset_active : 1;
> > +               u32 flrsm_state : 7;
> > +               u32 rsvd0 : 13;
> > +               u32 dlb_proc_reset_done : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS 0xb4000004
> > +#define DLB2_CFG_MSTR_CFG_DIAGNOSTIC_IDLE_STATUS_RST 0x9d0fffff
> > +union dlb2_cfg_mstr_cfg_diagnostic_idle_status {
> > +       struct {
> > +               u32 chp_pipeidle : 1;
> > +               u32 rop_pipeidle : 1;
> > +               u32 lsp_pipeidle : 1;
> > +               u32 nalb_pipeidle : 1;
> > +               u32 ap_pipeidle : 1;
> > +               u32 dp_pipeidle : 1;
> > +               u32 qed_pipeidle : 1;
> > +               u32 dqed_pipeidle : 1;
> > +               u32 aqed_pipeidle : 1;
> > +               u32 sys_pipeidle : 1;
> > +               u32 chp_unit_idle : 1;
> > +               u32 rop_unit_idle : 1;
> > +               u32 lsp_unit_idle : 1;
> > +               u32 nalb_unit_idle : 1;
> > +               u32 ap_unit_idle : 1;
> > +               u32 dp_unit_idle : 1;
> > +               u32 qed_unit_idle : 1;
> > +               u32 dqed_unit_idle : 1;
> > +               u32 aqed_unit_idle : 1;
> > +               u32 sys_unit_idle : 1;
> > +               u32 rsvd1 : 4;
> > +               u32 mstr_cfg_ring_idle : 1;
> > +               u32 mstr_cfg_mstr_idle : 1;
> > +               u32 mstr_flr_clkreq_b : 1;
> > +               u32 mstr_proc_idle : 1;
> > +               u32 mstr_proc_idle_masked : 1;
> > +               u32 rsvd0 : 2;
> > +               u32 dlb_func_idle : 1;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CFG_MSTR_CFG_PM_STATUS 0xb4000014
> > +#define DLB2_CFG_MSTR_CFG_PM_STATUS_RST 0x100403e
> > +union dlb2_cfg_mstr_cfg_pm_status {
> > +       struct {
> > +               u32 prochot : 1;
> > +               u32 pgcb_dlb_idle : 1;
> > +               u32 pgcb_dlb_pg_rdy_ack_b : 1;
> > +               u32 pmsm_pgcb_req_b : 1;
> > +               u32 pgbc_pmc_pg_req_b : 1;
> > +               u32 pmc_pgcb_pg_ack_b : 1;
> > +               u32 pmc_pgcb_fet_en_b : 1;
> > +               u32 pgcb_fet_en_b : 1;
> > +               u32 rsvz0 : 1;
> > +               u32 rsvz1 : 1;
> > +               u32 fuse_force_on : 1;
> > +               u32 fuse_proc_disable : 1;
> > +               u32 rsvz2 : 1;
> > +               u32 rsvz3 : 1;
> > +               u32 pm_fsm_d0tod3_ok : 1;
> > +               u32 pm_fsm_d3tod0_ok : 1;
> > +               u32 dlb_in_d3 : 1;
> > +               u32 rsvz4 : 7;
> > +               u32 pmsm : 8;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE 0xb4000018
> > +#define DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE_RST 0x1
> > +union dlb2_cfg_mstr_cfg_pm_pmcsr_disable {
> > +       struct {
> > +               u32 disable : 1;
> > +               u32 rsvz0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_VF2PF_MAILBOX_BYTES 256
> > +#define DLB2_FUNC_VF_VF2PF_MAILBOX(x) \
> > +       (0x1000 + (x) * 0x4)
> > +#define DLB2_FUNC_VF_VF2PF_MAILBOX_RST 0x0
> > +union dlb2_func_vf_vf2pf_mailbox {
> > +       struct {
> > +               u32 msg : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
> > +#define DLB2_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
> > +#define DLB2_FUNC_VF_SIOV_VF2PF_MAILBOX_ISR_TRIGGER 0x8000
> > +union dlb2_func_vf_vf2pf_mailbox_isr {
> > +       struct {
> > +               u32 isr : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_PF2VF_MAILBOX_BYTES 64
> > +#define DLB2_FUNC_VF_PF2VF_MAILBOX(x) \
> > +       (0x2000 + (x) * 0x4)
> > +#define DLB2_FUNC_VF_PF2VF_MAILBOX_RST 0x0
> > +union dlb2_func_vf_pf2vf_mailbox {
> > +       struct {
> > +               u32 msg : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
> > +#define DLB2_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
> > +union dlb2_func_vf_pf2vf_mailbox_isr {
> > +       struct {
> > +               u32 pf_isr : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
> > +#define DLB2_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
> > +union dlb2_func_vf_vf_msi_isr_pend {
> > +       struct {
> > +               u32 isr_pend : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
> > +#define DLB2_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
> > +union dlb2_func_vf_vf_reset_in_progress {
> > +       struct {
> > +               u32 reset_in_progress : 1;
> > +               u32 rsvd0 : 31;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#define DLB2_FUNC_VF_VF_MSI_ISR 0x4000
> > +#define DLB2_FUNC_VF_VF_MSI_ISR_RST 0x0
> > +union dlb2_func_vf_vf_msi_isr {
> > +       struct {
> > +               u32 vf_msi_isr : 32;
> > +       } field;
> > +       u32 val;
> > +};
> > +
> > +#endif /* __DLB2_REGS_H */
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c

> > new file mode 100644
> > index 0000000..6de8b95
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
> > @@ -0,0 +1,274 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#include "dlb2_user.h"
> > +
> > +#include "dlb2_hw_types.h"
> > +#include "dlb2_mbox.h"
> > +#include "dlb2_osdep.h"
> > +#include "dlb2_osdep_bitmap.h"
> > +#include "dlb2_osdep_types.h"
> > +#include "dlb2_regs.h"
> > +#include "dlb2_resource.h"
> > +
> > +static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)
> > +{
> > +       int i;
> > +
> > +       dlb2_list_init_head(&domain->used_ldb_queues);
> > +       dlb2_list_init_head(&domain->used_dir_pq_pairs);
> > +       dlb2_list_init_head(&domain->avail_ldb_queues);
> > +       dlb2_list_init_head(&domain->avail_dir_pq_pairs);
> > +
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               dlb2_list_init_head(&domain->used_ldb_ports[i]);
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               dlb2_list_init_head(&domain->avail_ldb_ports[i]);
> > +}
> > +
> > +static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)
> > +{
> > +       int i;
> > +
> > +       dlb2_list_init_head(&rsrc->avail_domains);
> > +       dlb2_list_init_head(&rsrc->used_domains);
> > +       dlb2_list_init_head(&rsrc->avail_ldb_queues);
> > +       dlb2_list_init_head(&rsrc->avail_dir_pq_pairs);
> > +
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               dlb2_list_init_head(&rsrc->avail_ldb_ports[i]);
> > +}
> > +
> > +void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
> > +{
> > +       union dlb2_chp_cfg_chp_csr_ctrl r0;
> > +
> > +       r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
> > +
> > +       r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
> > +
> > +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
> > +}
> > +
> > +int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
> > +                             struct dlb2_get_num_resources_args *arg,
> > +                             bool vdev_req,
> > +                             unsigned int vdev_id)
> > +{
> > +       struct dlb2_function_resources *rsrcs;
> > +       struct dlb2_bitmap *map;
> > +       int i;
> > +
> > +       if (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)
> > +               return -EINVAL;
> > +
> > +       if (vdev_req)
> > +               rsrcs = &hw->vdev[vdev_id];
> > +       else
> > +               rsrcs = &hw->pf;
> > +
> > +       arg->num_sched_domains = rsrcs->num_avail_domains;
> > +
> > +       arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
> > +
> > +       arg->num_ldb_ports = 0;
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               arg->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];
> > +
> > +       arg->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];
> > +       arg->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];
> > +       arg->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];
> > +       arg->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];
> > +
> > +       arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
> > +
> > +       arg->num_atomic_inflights = rsrcs->num_avail_aqed_entries;
> > +
> > +       map = rsrcs->avail_hist_list_entries;
> > +
> > +       arg->num_hist_list_entries = dlb2_bitmap_count(map);
> > +
> > +       arg->max_contiguous_hist_list_entries =
> > +               dlb2_bitmap_longest_set_range(map);
> > +
> > +       arg->num_ldb_credits = rsrcs->num_avail_qed_entries;
> > +
> > +       arg->num_dir_credits = rsrcs->num_avail_dqed_entries;
> > +
> > +       return 0;
> > +}
> > +
> > +void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
> > +{
> > +       union dlb2_chp_cfg_chp_csr_ctrl r0;
> > +
> > +       r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
> > +
> > +       r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
> > +
> > +       DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
> > +}
> > +
> > +void dlb2_resource_free(struct dlb2_hw *hw)
> > +{
> > +       int i;
> > +
> > +       if (hw->pf.avail_hist_list_entries)
> > +               dlb2_bitmap_free(hw->pf.avail_hist_list_entries);
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> > +               if (hw->vdev[i].avail_hist_list_entries)
> > +                       dlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);
> > +       }
> > +}
> > +
> > +int dlb2_resource_init(struct dlb2_hw *hw)
> > +{
> > +       struct dlb2_list_entry *list;
> > +       unsigned int i;
> > +       int ret;
> > +
> > +       /*
> > +        * For optimal load-balancing, ports that map to one or more QIDs in
> > +        * common should not be in numerical sequence. This is application
> > +        * dependent, but the driver interleaves port IDs as much as possible
> > +        * to reduce the likelihood of this. This initial allocation maximizes
> > +        * the average distance between an ID and its immediate neighbors (i.e.
> > +        * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
> > +        * 3, etc.).
> > +        */
> > +       u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {
> > +               0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,
> > +               16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,
> > +               32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,
> > +               48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,
> > +       };
> > +
> > +       /* Zero-out resource tracking data structures */
> > +       memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
> > +       memset(&hw->pf, 0, sizeof(hw->pf));
> > +
> > +       dlb2_init_fn_rsrc_lists(&hw->pf);
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> > +               memset(&hw->vdev[i], 0, sizeof(hw->vdev[i]));
> > +               dlb2_init_fn_rsrc_lists(&hw->vdev[i]);
> > +       }
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
> > +               memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
> > +               dlb2_init_domain_rsrc_lists(&hw->domains[i]);
> > +               hw->domains[i].parent_func = &hw->pf;
> > +       }
> > +
> > +       /* Give all resources to the PF driver */
> > +       hw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;
> > +       for (i = 0; i < hw->pf.num_avail_domains; i++) {
> > +               list = &hw->domains[i].func_list;
> > +
> > +               dlb2_list_add(&hw->pf.avail_domains, list);
> > +       }
> > +
> > +       hw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;
> > +       for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
> > +               list = &hw->rsrcs.ldb_queues[i].func_list;
> > +
> > +               dlb2_list_add(&hw->pf.avail_ldb_queues, list);
> > +       }
> > +
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               hw->pf.num_avail_ldb_ports[i] =
> > +                       DLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
> > +               int cos_id = i >> DLB2_NUM_COS_DOMAINS;
> > +               struct dlb2_ldb_port *port;
> > +
> > +               port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
> > +
> > +               dlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],
> > +                             &port->func_list);
> > +       }
> > +
> > +       hw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS;
> > +       for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
> > +               list = &hw->rsrcs.dir_pq_pairs[i].func_list;
> > +
> > +               dlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);
> > +       }
> > +
> > +       hw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;
> > +       hw->pf.num_avail_dqed_entries = DLB2_MAX_NUM_DIR_CREDITS;
> > +       hw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;
> > +
> > +       ret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,
> > +                               DLB2_MAX_NUM_HIST_LIST_ENTRIES);
> > +       if (ret)
> > +               goto unwind;
> > +
> > +       ret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);
> > +       if (ret)
> > +               goto unwind;
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {
> > +               ret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,
> > +                                       DLB2_MAX_NUM_HIST_LIST_ENTRIES);
> > +               if (ret)
> > +                       goto unwind;
> > +
> > +               ret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);
> > +               if (ret)
> > +                       goto unwind;
> > +       }
> > +
> > +       /* Initialize the hardware resource IDs */
> > +       for (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {
> > +               hw->domains[i].id.phys_id = i;
> > +               hw->domains[i].id.vdev_owned = false;
> > +       }
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {
> > +               hw->rsrcs.ldb_queues[i].id.phys_id = i;
> > +               hw->rsrcs.ldb_queues[i].id.vdev_owned = false;
> > +       }
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {
> > +               hw->rsrcs.ldb_ports[i].id.phys_id = i;
> > +               hw->rsrcs.ldb_ports[i].id.vdev_owned = false;
> > +       }
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_DIR_PORTS; i++) {
> > +               hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
> > +               hw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;
> > +       }
> > +
> > +       for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
> > +               hw->rsrcs.sn_groups[i].id = i;
> > +               /* Default mode (0) is 64 sequence numbers per queue */
> > +               hw->rsrcs.sn_groups[i].mode = 0;
> > +               hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;
> > +               hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
> > +       }
> > +
> > +       for (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)
> > +               hw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;
> > +
> > +       return 0;
> > +
> > +unwind:
> > +       dlb2_resource_free(hw);
> > +
> > +       return ret;
> > +}
> > +
> > +void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw)
> > +{
> > +       union dlb2_cfg_mstr_cfg_pm_pmcsr_disable r0;
> > +
> > +       r0.val = DLB2_CSR_RD(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE);
> > +
> > +       r0.field.disable = 0;
> > +
> > +       DLB2_CSR_WR(hw, DLB2_CFG_MSTR_CFG_PM_PMCSR_DISABLE, r0.val);
> > +}
> > diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.h b/drivers/event/dlb2/pf/base/dlb2_resource.h
> > new file mode 100644
> > index 0000000..503fdf3
> > --- /dev/null
> > +++ b/drivers/event/dlb2/pf/base/dlb2_resource.h
> > @@ -0,0 +1,1913 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2016-2020 Intel Corporation
> > + */
> > +
> > +#ifndef __DLB2_RESOURCE_H
> > +#define __DLB2_RESOURCE_H
> > +
> > +#include "dlb2_user.h"
> > +
> > +#include "dlb2_hw_types.h"
> > +#include "dlb2_osdep_types.h"
> > +
> > +/**
> > + * dlb2_resource_init() - initialize the device
> > + * @hw: pointer to struct dlb2_hw.
> > + *
> > + * This function initializes the device's software state (pointed to by the hw
> > + * argument) and programs global scheduling QoS registers. This function
> > + * should be called during driver initialization.
> > + *
> > + * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the
> > + * device is reset.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + */
> > +int dlb2_resource_init(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_resource_free() - free device state memory
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function frees software state pointed to by dlb2_hw. This function
> > + * should be called when resetting the device or unloading the driver.
> > + */
> > +void dlb2_resource_free(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_resource_reset() - reset in-use resources to their initial state
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function resets in-use resources, and makes them available for use.
> > + * All resources go back to their owning function, whether a PF or a VF.
> > + */
> > +void dlb2_resource_reset(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_hw_create_sched_domain() - create a scheduling domain
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @args: scheduling domain creation arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function creates a scheduling domain containing the resources specified
> > + * in args. The individual resources (queues, ports, credits) can be configured
> > + * after creating a scheduling domain.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the domain ID.
> > + *
> > + * resp->id contains a virtual ID if vdev_request is true.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, or the requested domain name
> > + *         is already in use.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,
> > +                               struct dlb2_create_sched_domain_args *args,
> > +                               struct dlb2_cmd_response *resp,
> > +                               bool vdev_request,
> > +                               unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_create_ldb_queue() - create a load-balanced queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: queue creation arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function creates a load-balanced queue.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the queue ID.
> > + *
> > + * resp->id contains a virtual ID if vdev_request is true.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, the domain is not configured,
> > + *         the domain has already been started, or the requested queue name is
> > + *         already in use.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,
> > +                            u32 domain_id,
> > +                            struct dlb2_create_ldb_queue_args *args,
> > +                            struct dlb2_cmd_response *resp,
> > +                            bool vdev_request,
> > +                            unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_create_dir_queue() - create a directed queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: queue creation arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function creates a directed queue.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the queue ID.
> > + *
> > + * resp->id contains a virtual ID if vdev_request is true.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, the domain is not configured,
> > + *         or the domain has already been started.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
> > +                            u32 domain_id,
> > +                            struct dlb2_create_dir_queue_args *args,
> > +                            struct dlb2_cmd_response *resp,
> > +                            bool vdev_request,
> > +                            unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_create_dir_port() - create a directed port
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port creation arguments.
> > + * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function creates a directed port.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the port ID.
> > + *
> > + * resp->id contains a virtual ID if vdev_request is true.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
> > + *         pointer address is not properly aligned, the domain is not
> > + *         configured, or the domain has already been started.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
> > +                           u32 domain_id,
> > +                           struct dlb2_create_dir_port_args *args,
> > +                           uintptr_t cq_dma_base,
> > +                           struct dlb2_cmd_response *resp,
> > +                           bool vdev_request,
> > +                           unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_create_ldb_port() - create a load-balanced port
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port creation arguments.
> > + * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function creates a load-balanced port.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp-
> >id
> > + * contains the port ID.
> > + *
> > + * resp->id contains a virtual ID if vdev_request is true.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
> > + *         pointer address is not properly aligned, the domain is not
> > + *         configured, or the domain has already been started.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,
> > +                           u32 domain_id,
> > +                           struct dlb2_create_ldb_port_args *args,
> > +                           uintptr_t cq_dma_base,
> > +                           struct dlb2_cmd_response *resp,
> > +                           bool vdev_request,
> > +                           unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_start_domain() - start a scheduling domain
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: start domain arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function starts a scheduling domain, which allows applications to send
> > + * traffic through it. Once a domain is started, its resources can no longer be
> > + * configured (besides QID remapping and port enable/disable).
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - the domain is not configured, or the domain is already started.
> > + */
> > +int dlb2_hw_start_domain(struct dlb2_hw *hw,
> > +                        u32 domain_id,
> > +                        struct dlb2_start_domain_args *args,
> > +                        struct dlb2_cmd_response *resp,
> > +                        bool vdev_request,
> > +                        unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: map QID arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to schedule QEs from the specified queue
> > + * to the specified port. Each load-balanced port can be mapped to up to 8
> > + * queues; each load-balanced queue can potentially map to all the
> > + * load-balanced ports.
> > + *
> > + * A successful return does not necessarily mean the mapping was configured. If
> > + * this function is unable to immediately map the queue to the port, it will
> > + * add the requested operation to a per-port list of pending map/unmap
> > + * operations, and (if it's not already running) launch a kernel thread that
> > + * periodically attempts to process all pending operations. In a sense, this is
> > + * an asynchronous function.
> > + *
> > + * This asynchronicity creates two views of the state of hardware: the actual
> > + * hardware state and the requested state (as if every request completed
> > + * immediately). If there are any pending map/unmap operations, the requested
> > + * state will differ from the actual state. All validation is performed with
> > + * respect to the pending state; for instance, if there are 8 pending map
> > + * operations for port X, a request for a 9th will fail because a load-balanced
> > + * port can only map up to 8 queues.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
> > + *         the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_map_qid(struct dlb2_hw *hw,
> > +                   u32 domain_id,
> > +                   struct dlb2_map_qid_args *args,
> > +                   struct dlb2_cmd_response *resp,
> > +                   bool vdev_request,
> > +                   unsigned int vdev_id);
> > +
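The "requested vs. actual state" validation described above is worth illustrating. The following is a minimal, self-contained sketch of that bookkeeping (the `sketch_*` names and struct are illustrative only, not part of this patch): a map request is accepted against the pending state, so a 9th request for a port fails even if none of the first 8 have completed in hardware yet.

```c
#include <assert.h>
#include <errno.h>

#define SKETCH_QID_SLOTS 8 /* a load-balanced port maps at most 8 queues */

struct sketch_port {
	int num_mapped;  /* mappings hardware has actually completed */
	int num_pending; /* map requests queued for the background thread */
};

/* Validate against the *requested* state (mapped + pending), mirroring the
 * doc comment: a 9th map request fails even if none have completed yet. */
static int sketch_map_qid(struct sketch_port *p)
{
	if (p->num_mapped + p->num_pending >= SKETCH_QID_SLOTS)
		return -EINVAL;
	p->num_pending++; /* deferred; a worker thread would later complete it */
	return 0;
}
```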
> > +/**
> > + * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: unmap QID arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to stop scheduling QEs from the specified
> > + * queue to the specified port.
> > + *
> > + * A successful return does not necessarily mean the mapping was removed. If
> > + * this function is unable to immediately unmap the queue from the port, it
> > + * will add the requested operation to a per-port list of pending map/unmap
> > + * operations, and (if it's not already running) launch a kernel thread that
> > + * periodically attempts to process all pending operations. See
> > + * dlb2_hw_map_qid() for more details.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
> > + *         the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_unmap_qid(struct dlb2_hw *hw,
> > +                     u32 domain_id,
> > +                     struct dlb2_unmap_qid_args *args,
> > +                     struct dlb2_cmd_response *resp,
> > +                     bool vdev_request,
> > +                     unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function attempts to finish any outstanding unmap procedures.
> > + * This function should be called by the kernel thread responsible for
> > + * finishing map/unmap procedures.
> > + *
> > + * Return:
> > + * Returns the number of procedures that weren't completed.
> > + */
> > +unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_finish_map_qid_procedures() - finish any pending map procedures
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function attempts to finish any outstanding map procedures.
> > + * This function should be called by the kernel thread responsible for
> > + * finishing map/unmap procedures.
> > + *
> > + * Return:
> > + * Returns the number of procedures that weren't completed.
> > + */
> > +unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_hw_enable_ldb_port() - enable a load-balanced port for scheduling
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port enable arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to schedule QEs to a load-balanced port.
> > + * Ports are enabled by default.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid or the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_enable_ldb_port(struct dlb2_hw *hw,
> > +                           u32 domain_id,
> > +                           struct dlb2_enable_ldb_port_args *args,
> > +                           struct dlb2_cmd_response *resp,
> > +                           bool vdev_request,
> > +                           unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_disable_ldb_port() - disable a load-balanced port for scheduling
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port disable arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to stop scheduling QEs to a load-balanced
> > + * port. Ports are enabled by default.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid or the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_disable_ldb_port(struct dlb2_hw *hw,
> > +                            u32 domain_id,
> > +                            struct dlb2_disable_ldb_port_args *args,
> > +                            struct dlb2_cmd_response *resp,
> > +                            bool vdev_request,
> > +                            unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_enable_dir_port() - enable a directed port for scheduling
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port enable arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to schedule QEs to a directed port.
> > + * Ports are enabled by default.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid or the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_enable_dir_port(struct dlb2_hw *hw,
> > +                           u32 domain_id,
> > +                           struct dlb2_enable_dir_port_args *args,
> > +                           struct dlb2_cmd_response *resp,
> > +                           bool vdev_request,
> > +                           unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_disable_dir_port() - disable a directed port for scheduling
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: port disable arguments.
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function configures the DLB to stop scheduling QEs to a directed port.
> > + * Ports are enabled by default.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid or the domain is not configured.
> > + * EFAULT - Internal error (resp->status not set).
> > + */
> > +int dlb2_hw_disable_dir_port(struct dlb2_hw *hw,
> > +                            u32 domain_id,
> > +                            struct dlb2_disable_dir_port_args *args,
> > +                            struct dlb2_cmd_response *resp,
> > +                            bool vdev_request,
> > +                            unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_configure_ldb_cq_interrupt() - configure load-balanced CQ for
> > + *                                     interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @port_id: load-balanced port ID.
> > + * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
> > + *         else a value up to 64.
> > + * @mode: interrupt type (DLB2_CQ_ISR_MODE_MSI or DLB2_CQ_ISR_MODE_MSIX)
> > + * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
> > + *     virtual port ID to a physical port ID. Ignored if mode is not MSI.
> > + * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
> > + * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
> > + *     greater than 0.
> > + *
> > + * This function configures the DLB registers for load-balanced CQ's
> > + * interrupts. This doesn't enable the CQ's interrupt; that can be done with
> > + * dlb2_arm_cq_interrupt() or through an interrupt arm QE.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid.
> > + */
> > +int dlb2_configure_ldb_cq_interrupt(struct dlb2_hw *hw,
> > +                                   int port_id,
> > +                                   int vector,
> > +                                   int mode,
> > +                                   unsigned int vf,
> > +                                   unsigned int owner_vf,
> > +                                   u16 threshold);
> > +
> > +/**
> > + * dlb2_configure_dir_cq_interrupt() - configure directed CQ for interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @port_id: directed port ID.
> > + * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
> > + *         else a value up to 64.
> > + * @mode: interrupt type (DLB2_CQ_ISR_MODE_MSI or DLB2_CQ_ISR_MODE_MSIX)
> > + * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
> > + *     virtual port ID to a physical port ID. Ignored if mode is not MSI.
> > + * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
> > + * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
> > + *     greater than 0.
> > + *
> > + * This function configures the DLB registers for directed CQ's interrupts.
> > + * This doesn't enable the CQ's interrupt; that can be done with
> > + * dlb2_arm_cq_interrupt() or through an interrupt arm QE.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - The port ID is invalid.
> > + */
> > +int dlb2_configure_dir_cq_interrupt(struct dlb2_hw *hw,
> > +                                   int port_id,
> > +                                   int vector,
> > +                                   int mode,
> > +                                   unsigned int vf,
> > +                                   unsigned int owner_vf,
> > +                                   u16 threshold);
> > +
> > +/**
> > + * dlb2_enable_ingress_error_alarms() - enable ingress error alarm interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + */
> > +void dlb2_enable_ingress_error_alarms(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_disable_ingress_error_alarms() - disable ingress error alarm interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + */
> > +void dlb2_disable_ingress_error_alarms(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_set_msix_mode() - configure the hardware MSI-X mode
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @mode: MSI-X mode (DLB2_MSIX_MODE_PACKED or DLB2_MSIX_MODE_COMPRESSED)
> > + *
> > + * This function configures the hardware to use either packed or compressed
> > + * mode. This function should not be called if using MSI interrupts.
> > + */
> > +void dlb2_set_msix_mode(struct dlb2_hw *hw, int mode);
> > +
> > +/**
> > + * dlb2_ack_msix_interrupt() - Ack an MSI-X interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vector: interrupt vector.
> > + *
> > + * Note: Only needed for PF service interrupts (vector 0). CQ interrupts are
> > + * acked in dlb2_ack_compressed_cq_intr().
> > + */
> > +void dlb2_ack_msix_interrupt(struct dlb2_hw *hw, int vector);
> > +
> > +/**
> > + * dlb2_arm_cq_interrupt() - arm a CQ's interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @port_id: port ID
> > + * @is_ldb: true for load-balanced port, false for a directed port
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function arms the CQ's interrupt. The CQ must be configured prior to
> > + * calling this function.
> > + *
> > + * The function does no parameter validation; that is the caller's
> > + * responsibility.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return: returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - Invalid port ID.
> > + */
> > +int dlb2_arm_cq_interrupt(struct dlb2_hw *hw,
> > +                         int port_id,
> > +                         bool is_ldb,
> > +                         bool vdev_request,
> > +                         unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_read_compressed_cq_intr_status() - read compressed CQ interrupt status
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @ldb_interrupts: 2-entry array of u32 bitmaps
> > + * @dir_interrupts: 4-entry array of u32 bitmaps
> > + *
> > + * This function can be called from a compressed CQ interrupt handler to
> > + * determine which CQ interrupts have fired. The caller should take
> > + * appropriate action (such as waking threads blocked on a CQ's interrupt),
> > + * then ack the interrupts with dlb2_ack_compressed_cq_intr().
> > + */
> > +void dlb2_read_compressed_cq_intr_status(struct dlb2_hw *hw,
> > +                                        u32 *ldb_interrupts,
> > +                                        u32 *dir_interrupts);
> > +
> > +/**
> > + * dlb2_ack_compressed_cq_intr() - ack compressed CQ interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @ldb_interrupts: 2-entry array of u32 bitmaps
> > + * @dir_interrupts: 4-entry array of u32 bitmaps
> > + *
> > + * This function ACKs compressed CQ interrupts. Its arguments should be the
> > + * same ones passed to dlb2_read_compressed_cq_intr_status().
> > + */
> > +void dlb2_ack_compressed_cq_intr(struct dlb2_hw *hw,
> > +                                u32 *ldb_interrupts,
> > +                                u32 *dir_interrupts);
> > +
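The read/ack pair above hands the handler raw u32 bitmaps (2 words for load-balanced CQs, 4 for directed). A typical consumer walks the set bits to recover CQ IDs; here is a self-contained sketch of that walk (the `sketch_*` names are illustrative, not part of the PMD), using the usual lowest-set-bit idiom:

```c
#include <assert.h>
#include <stdint.h>

/* Collect the CQ IDs whose bits are set in an array of u32 bitmaps, as
 * returned by a compressed CQ interrupt status read. Word w, bit b maps to
 * CQ ID (w * 32 + b). Returns the number of IDs written to ids[]. */
static int sketch_fired_cqs(const uint32_t *bitmaps, int nwords,
			    int *ids, int max_ids)
{
	int n = 0;

	for (int w = 0; w < nwords; w++) {
		uint32_t word = bitmaps[w];

		while (word != 0 && n < max_ids) {
			ids[n++] = w * 32 + __builtin_ctz(word);
			word &= word - 1; /* clear the lowest set bit */
		}
	}
	return n;
}
```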
> > +/**
> > + * dlb2_read_vf_intr_status() - read the VF interrupt status register
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function can be called from a VF's interrupt handler to determine
> > + * which interrupts have fired. The first 31 bits correspond to CQ interrupt
> > + * vectors, and the final bit is for the PF->VF mailbox interrupt vector.
> > + *
> > + * Return:
> > + * Returns a bit vector indicating which interrupt vectors are active.
> > + */
> > +u32 dlb2_read_vf_intr_status(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_ack_vf_intr_status() - ack VF interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @interrupts: 32-bit bitmap
> > + *
> > + * This function ACKs a VF's interrupts. Its interrupts argument should be the
> > + * value returned by dlb2_read_vf_intr_status().
> > + */
> > +void dlb2_ack_vf_intr_status(struct dlb2_hw *hw, u32 interrupts);
> > +
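Per the comment above, the VF status register packs 31 CQ interrupt vectors plus one mailbox bit. A minimal decode sketch (the `SKETCH_`/`sketch_` names are illustrative; the bit layout is taken from the doc comment, with the mailbox in the final bit):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_VF_MBOX_BIT (1u << 31) /* final bit: PF->VF mailbox vector */

/* True if the PF->VF mailbox interrupt is pending. */
static bool sketch_vf_mbox_pending(uint32_t status)
{
	return (status & SKETCH_VF_MBOX_BIT) != 0;
}

/* Mask of the CQ interrupt vectors (the first 31 bits). */
static uint32_t sketch_vf_cq_vectors(uint32_t status)
{
	return status & ~SKETCH_VF_MBOX_BIT;
}
```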
> > +/**
> > + * dlb2_ack_vf_msi_intr() - ack VF MSI interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @interrupts: 32-bit bitmap
> > + *
> > + * This function clears the VF's MSI interrupt pending register. Its interrupts
> > + * argument should contain the MSI vectors to ACK. For example, if MSI MME
> > + * is in mode 0, then only bit 0 should ever be set.
> > + */
> > +void dlb2_ack_vf_msi_intr(struct dlb2_hw *hw, u32 interrupts);
> > +
> > +/**
> > + * dlb2_ack_pf_mbox_int() - ack PF->VF mailbox interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * When done processing the PF mailbox request, this function unsets
> > + * the PF's mailbox ISR register.
> > + */
> > +void dlb2_ack_pf_mbox_int(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_read_vdev_to_pf_int_bitvec() - return a bit vector of all requesting
> > + *                                     vdevs
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * When the vdev->PF ISR fires, this function can be called to determine which
> > + * vdev(s) are requesting service. This bitvector must be passed to
> > + * dlb2_ack_vdev_to_pf_int() when processing is complete for all requesting
> > + * vdevs.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns a bit vector indicating which VFs (0-15) have requested service.
> > + */
> > +u32 dlb2_read_vdev_to_pf_int_bitvec(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_ack_vdev_mbox_int() - ack processed vdev->PF mailbox interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @bitvec: bit vector returned by dlb2_read_vdev_to_pf_int_bitvec()
> > + *
> > + * When done processing all VF mailbox requests, this function unsets the VF's
> > + * mailbox ISR register.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_ack_vdev_mbox_int(struct dlb2_hw *hw, u32 bitvec);
> > +
> > +/**
> > + * dlb2_read_vf_flr_int_bitvec() - return a bit vector of all VFs requesting
> > + *                                 FLR
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * When the VF FLR ISR fires, this function can be called to determine which
> > + * VF(s) are requesting FLRs. This bitvector must be passed to
> > + * dlb2_ack_vf_flr_int() when processing is complete for all requesting VFs.
> > + *
> > + * Return:
> > + * Returns a bit vector indicating which VFs (0-15) have requested FLRs.
> > + */
> > +u32 dlb2_read_vf_flr_int_bitvec(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_ack_vf_flr_int() - ack processed VF<->PF interrupt(s)
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @bitvec: bit vector returned by dlb2_read_vf_flr_int_bitvec()
> > + *
> > + * When done processing all VF FLR requests, this function unsets the VF's FLR
> > + * ISR register.
> > + */
> > +void dlb2_ack_vf_flr_int(struct dlb2_hw *hw, u32 bitvec);
> > +
> > +/**
> > + * dlb2_ack_vdev_to_pf_int() - ack processed VF mbox and FLR interrupt(s)
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @mbox_bitvec: bit vector returned by dlb2_read_vdev_to_pf_int_bitvec()
> > + * @flr_bitvec: bit vector returned by dlb2_read_vf_flr_int_bitvec()
> > + *
> > + * When done processing all VF requests, this function communicates to the
> > + * hardware that processing is complete.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_ack_vdev_to_pf_int(struct dlb2_hw *hw,
> > +                            u32 mbox_bitvec,
> > +                            u32 flr_bitvec);
> > +
> > +/**
> > + * dlb2_process_wdt_interrupt() - process watchdog timer interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function reads the watchdog timer interrupt cause registers to
> > + * determine which port(s) had a watchdog timeout, and notifies the
> > + * application(s) that own the port(s).
> > + */
> > +void dlb2_process_wdt_interrupt(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_process_alarm_interrupt() - process an alarm interrupt
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function reads and logs the alarm syndrome, then acks the interrupt.
> > + * This function should be called from the alarm interrupt handler when
> > + * interrupt vector DLB2_INT_ALARM fires.
> > + */
> > +void dlb2_process_alarm_interrupt(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_process_ingress_error_interrupt() - process ingress error interrupts
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function reads the alarm syndrome, logs it, notifies user-space, and
> > + * acks the interrupt. This function should be called from the alarm interrupt
> > + * handler when interrupt vector DLB2_INT_INGRESS_ERROR fires.
> > + *
> > + * Return:
> > + * Returns true if an ingress error interrupt occurred, false otherwise
> > + */
> > +bool dlb2_process_ingress_error_interrupt(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_get_group_sequence_numbers() - return a group's number of SNs per queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @group_id: sequence number group ID.
> > + *
> > + * This function returns the configured number of sequence numbers per queue
> > + * for the specified group.
> > + *
> > + * Return:
> > + * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
> > + */
> > +int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw,
> > +                                   unsigned int group_id);
> > +
> > +/**
> > + * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @group_id: sequence number group ID.
> > + *
> > + * This function returns the group's number of in-use slots (i.e. load-balanced
> > + * queues using the specified group).
> > + *
> > + * Return:
> > + * Returns -EINVAL if group_id is invalid, else the group's occupancy.
> > + */
> > +int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
> > +                                            unsigned int group_id);
> > +
> > +/**
> > + * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @group_id: sequence number group ID.
> > + * @val: requested amount of sequence numbers per queue.
> > + *
> > + * This function configures the group's number of sequence numbers per queue.
> > + * val can be a power-of-two between 32 and 1024, inclusive. This setting can
> > + * be configured until the first ordered load-balanced queue is configured, at
> > + * which point the configuration is locked.
> > + *
> > + * Return:
> > + * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
> > + * ordered queue is configured.
> > + */
> > +int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
> > +                                   unsigned int group_id,
> > +                                   unsigned long val);
> > +
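The constraints documented above (power of two in [32, 1024], locked once an ordered queue is configured, distinct -EINVAL/-EPERM errors) can be modeled in a few lines. This is a self-contained sketch, not the driver's implementation; `sketch_*` names and the `group_locked` flag are illustrative:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* val must be a power of two in [32, 1024], per the doc comment. */
static bool sketch_valid_sn_per_queue(unsigned long val)
{
	return val >= 32 && val <= 1024 && (val & (val - 1)) == 0;
}

/* Mimics the documented error contract: -EINVAL for a bad value, -EPERM once
 * the first ordered load-balanced queue has locked the configuration. */
static int sketch_set_group_sns(unsigned long *group_sns, bool group_locked,
				unsigned long val)
{
	if (!sketch_valid_sn_per_queue(val))
		return -EINVAL;
	if (group_locked)
		return -EPERM;
	*group_sns = val;
	return 0;
}
```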
> > +/**
> > + * dlb2_reset_domain() - reset a scheduling domain
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function resets and frees a DLB 2.0 scheduling domain and its
> > + * associated resources.
> > + *
> > + * Pre-condition: the driver must ensure software has stopped sending QEs
> > + * through this domain's producer ports before invoking this function, or
> > + * undefined behavior will result.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, -1 otherwise.
> > + *
> > + * EINVAL - Invalid domain ID, or the domain is not configured.
> > + * EFAULT - Internal error. (Possibly caused if the pre-condition above is
> > + *         not met.)
> > + * ETIMEDOUT - Hardware component didn't reset in the expected time.
> > + */
> > +int dlb2_reset_domain(struct dlb2_hw *hw,
> > +                     u32 domain_id,
> > +                     bool vdev_request,
> > +                     unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_ldb_port_owned_by_domain() - query whether a port is owned by a domain
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @port_id: load-balanced port ID.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns whether a load-balanced port is owned by a specified
> > + * domain.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 if false, 1 if true, <0 otherwise.
> > + *
> > + * EINVAL - Invalid domain or port ID, or the domain is not configured.
> > + */
> > +int dlb2_ldb_port_owned_by_domain(struct dlb2_hw *hw,
> > +                                 u32 domain_id,
> > +                                 u32 port_id,
> > +                                 bool vdev_request,
> > +                                 unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_dir_port_owned_by_domain() - query whether a port is owned by a domain
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @port_id: directed port ID.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns whether a directed port is owned by a specified
> > + * domain.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 if false, 1 if true, <0 otherwise.
> > + *
> > + * EINVAL - Invalid domain or port ID, or the domain is not configured.
> > + */
> > +int dlb2_dir_port_owned_by_domain(struct dlb2_hw *hw,
> > +                                 u32 domain_id,
> > +                                 u32 port_id,
> > +                                 bool vdev_request,
> > +                                 unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_get_num_resources() - query the PCI function's available resources
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @arg: pointer to resource counts.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns the number of available resources for the PF or for a
> > + * VF.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, -EINVAL if vdev_request is true and vdev_id is
> > + * invalid.
> > + */
> > +int dlb2_hw_get_num_resources(struct dlb2_hw *hw,
> > +                             struct dlb2_get_num_resources_args *arg,
> > +                             bool vdev_request,
> > +                             unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_get_num_used_resources() - query the PCI function's used resources
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @arg: pointer to resource counts.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns the number of resources in use by the PF or a VF. It
> > + * fills in the fields that args points to, except the following:
> > + * - max_contiguous_atomic_inflights
> > + * - max_contiguous_hist_list_entries
> > + * - max_contiguous_ldb_credits
> > + * - max_contiguous_dir_credits
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, -EINVAL if vdev_request is true and vdev_id is
> > + * invalid.
> > + */
> > +int dlb2_hw_get_num_used_resources(struct dlb2_hw *hw,
> > +                                  struct dlb2_get_num_resources_args *arg,
> > +                                  bool vdev_request,
> > +                                  unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_send_async_vdev_to_pf_msg() - (vdev only) send a mailbox message to
> > + *                                    the PF
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function sends a VF->PF mailbox message. It is asynchronous, so it
> > + * returns once the message is sent but potentially before the PF has processed
> > + * the message. The caller must call dlb2_vdev_to_pf_complete() to determine
> > + * when the PF has finished processing the request.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_send_async_vdev_to_pf_msg(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_vdev_to_pf_complete() - check the status of an asynchronous mailbox
> > + *                              request
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function returns a boolean indicating whether the PF has finished
> > + * processing a VF->PF mailbox request. It should only be called after sending
> > + * an asynchronous request with dlb2_send_async_vdev_to_pf_msg().
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +bool dlb2_vdev_to_pf_complete(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_vf_flr_complete() - check the status of a VF FLR
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function returns a boolean indicating whether the PF has finished
> > + * executing the VF FLR. It should only be called after setting the VF's FLR
> > + * bit.
> > + */
> > +bool dlb2_vf_flr_complete(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_send_async_pf_to_vdev_msg() - (PF only) send a mailbox message to a
> > + *                                     vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vdev_id: vdev ID.
> > + *
> > + * This function sends a PF->vdev mailbox message. It is asynchronous, so it
> > + * returns once the message is sent but potentially before the vdev has
> > + * processed the message. The caller must call dlb2_pf_to_vdev_complete() to
> > + * determine when the vdev has finished processing the request.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_send_async_pf_to_vdev_msg(struct dlb2_hw *hw, unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_pf_to_vdev_complete() - check the status of an asynchronous mailbox
> > + *                            request
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vdev_id: vdev ID.
> > + *
> > + * This function returns a boolean indicating whether the vdev has finished
> > + * processing a PF->vdev mailbox request. It should only be called after
> > + * sending an asynchronous request with dlb2_send_async_pf_to_vdev_msg().
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +bool dlb2_pf_to_vdev_complete(struct dlb2_hw *hw, unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_pf_read_vf_mbox_req() - (PF only) read a VF->PF mailbox request
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vf_id: VF ID.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies one of the PF's VF->PF mailboxes into the array pointed
> > + * to by data.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_VF2PF_REQ_BYTES.
> > + */
> > +int dlb2_pf_read_vf_mbox_req(struct dlb2_hw *hw,
> > +                            unsigned int vf_id,
> > +                            void *data,
> > +                            int len);
> > +
> > +/**
> > + * dlb2_pf_read_vf_mbox_resp() - (PF only) read a VF->PF mailbox response
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vf_id: VF ID.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies one of the PF's VF->PF mailboxes into the array pointed
> > + * to by data.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_VF2PF_RESP_BYTES.
> > + */
> > +int dlb2_pf_read_vf_mbox_resp(struct dlb2_hw *hw,
> > +                             unsigned int vf_id,
> > +                             void *data,
> > +                             int len);
> > +
> > +/**
> > + * dlb2_pf_write_vf_mbox_resp() - (PF only) write a PF->VF mailbox response
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vf_id: VF ID.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the user-provided message data into one of the PF's
> > + * VF->PF mailboxes.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_PF2VF_RESP_BYTES.
> > + */
> > +int dlb2_pf_write_vf_mbox_resp(struct dlb2_hw *hw,
> > +                              unsigned int vf_id,
> > +                              void *data,
> > +                              int len);
> > +
> > +/**
> > + * dlb2_pf_write_vf_mbox_req() - (PF only) write a PF->VF mailbox request
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vf_id: VF ID.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the user-provided message data into one of the PF's
> > + * VF->PF mailboxes.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_PF2VF_REQ_BYTES.
> > + */
> > +int dlb2_pf_write_vf_mbox_req(struct dlb2_hw *hw,
> > +                             unsigned int vf_id,
> > +                             void *data,
> > +                             int len);
> > +
> > +/**
> > + * dlb2_vf_read_pf_mbox_resp() - (VF only) read a PF->VF mailbox response
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the VF's PF->VF mailbox into the array pointed to by
> > + * data.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_PF2VF_RESP_BYTES.
> > + */
> > +int dlb2_vf_read_pf_mbox_resp(struct dlb2_hw *hw, void *data, int len);
> > +
> > +/**
> > + * dlb2_vf_read_pf_mbox_req() - (VF only) read a PF->VF mailbox request
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the VF's PF->VF mailbox into the array pointed to by
> > + * data.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_PF2VF_REQ_BYTES.
> > + */
> > +int dlb2_vf_read_pf_mbox_req(struct dlb2_hw *hw, void *data, int len);
> > +
> > +/**
> > + * dlb2_vf_write_pf_mbox_req() - (VF only) write a VF->PF mailbox request
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the user-provided message data into one of the VF's
> > + * PF->VF mailboxes.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_VF2PF_REQ_BYTES.
> > + */
> > +int dlb2_vf_write_pf_mbox_req(struct dlb2_hw *hw, void *data, int len);
> > +
> > +/**
> > + * dlb2_vf_write_pf_mbox_resp() - (VF only) write a VF->PF mailbox response
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @data: pointer to message data.
> > + * @len: size, in bytes, of the data array.
> > + *
> > + * This function copies the user-provided message data into one of the VF's
> > + * PF->VF mailboxes.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * EINVAL - len >= DLB2_VF2PF_RESP_BYTES.
> > + */
> > +int dlb2_vf_write_pf_mbox_resp(struct dlb2_hw *hw, void *data, int len);
> > +
> > +/**
> > + * dlb2_reset_vdev() - reset the hardware owned by a virtual device
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function resets the hardware owned by a vdev, by resetting the vdev's
> > + * domains one by one.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +int dlb2_reset_vdev(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_vdev_is_locked() - check whether the vdev's resources are locked
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function returns whether or not the vdev's resource assignments are
> > + * locked. If locked, no resources can be added to or subtracted from the
> > + * group.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +bool dlb2_vdev_is_locked(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_lock_vdev() - lock the vdev's resources
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function sets a flag indicating that the vdev is using its resources.
> > + * When the vdev is locked, its resource assignment cannot be changed.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_lock_vdev(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_unlock_vdev() - unlock the vdev's resources
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function unlocks the vdev's resource assignment, allowing it to be
> > + * modified.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + */
> > +void dlb2_unlock_vdev(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_update_vdev_sched_domains() - update the domains assigned to a vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of scheduling domains to assign to this vdev
> > + *
> > + * This function assigns num scheduling domains to the specified vdev. If the
> > + * vdev already has domains assigned, this existing assignment is adjusted
> > + * accordingly.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_sched_domains(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_ldb_queues() - update the LDB queues assigned to a vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of LDB queues to assign to this vdev
> > + *
> > + * This function assigns num LDB queues to the specified vdev. If the vdev
> > + * already has LDB queues assigned, this existing assignment is adjusted
> > + * accordingly.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_ldb_queues(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_ldb_ports() - update the LDB ports assigned to a vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of LDB ports to assign to this vdev
> > + *
> > + * This function assigns num LDB ports to the specified vdev. If the vdev
> > + * already has LDB ports assigned, this existing assignment is adjusted
> > + * accordingly.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_ldb_ports(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_ldb_cos_ports() - update the LDB ports assigned to a vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @cos: class-of-service ID
> > + * @num: number of LDB ports to assign to this vdev
> > + *
> > + * This function assigns num LDB ports from class-of-service cos to the
> > + * specified vdev. If the vdev already has LDB ports from this class-of-service
> > + * assigned, this existing assignment is adjusted accordingly.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_ldb_cos_ports(struct dlb2_hw *hw,
> > +                                  u32 id,
> > +                                  u32 cos,
> > +                                  u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_dir_ports() - update the DIR ports assigned to a vdev
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of DIR ports to assign to this vdev
> > + *
> > + * This function assigns num DIR ports to the specified vdev. If the vdev
> > + * already has DIR ports assigned, this existing assignment is adjusted
> > + * accordingly.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_dir_ports(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_ldb_credits() - update the vdev's assigned LDB credits
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of LDB credits to assign to this vdev
> > + *
> > + * This function assigns num LDB credits to the specified vdev. If the vdev
> > + * already has LDB credits assigned, this existing assignment is adjusted
> > + * accordingly. vdevs are assigned a contiguous chunk of credits, so this
> > + * function may fail if a sufficiently large contiguous chunk is not available.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_ldb_credits(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_dir_credits() - update the vdev's assigned DIR credits
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of DIR credits to assign to this vdev
> > + *
> > + * This function assigns num DIR credits to the specified vdev. If the vdev
> > + * already has DIR credits assigned, this existing assignment is adjusted
> > + * accordingly. vdevs are assigned a contiguous chunk of credits, so this
> > + * function may fail if a sufficiently large contiguous chunk is not available.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_dir_credits(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_hist_list_entries() - update the vdev's assigned HL entries
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of history list entries to assign to this vdev
> > + *
> > + * This function assigns num history list entries to the specified vdev. If the
> > + * vdev already has history list entries assigned, this existing assignment is
> > + * adjusted accordingly. vdevs are assigned a contiguous chunk of entries, so
> > + * this function may fail if a sufficiently large contiguous chunk is not
> > + * available.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_hist_list_entries(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_update_vdev_atomic_inflights() - update the vdev's atomic inflights
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + * @num: number of atomic inflights to assign to this vdev
> > + *
> > + * This function assigns num atomic inflights to the specified vdev. If the vdev
> > + * already has atomic inflights assigned, this existing assignment is adjusted
> > + * accordingly. vdevs are assigned a contiguous chunk of entries, so this
> > + * function may fail if a sufficiently large contiguous chunk is not available.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid, or the requested number of resources is
> > + *         unavailable.
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_update_vdev_atomic_inflights(struct dlb2_hw *hw, u32 id, u32 num);
> > +
> > +/**
> > + * dlb2_reset_vdev_resources() - reassign the vdev's resources to the PF
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function takes any resources currently assigned to the vdev and
> > + * reassigns them to the PF.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 otherwise.
> > + *
> > + * Errors:
> > + * EINVAL - id is invalid
> > + * EPERM  - The vdev's resource assignment is locked and cannot be changed.
> > + */
> > +int dlb2_reset_vdev_resources(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_notify_vf() - send an alarm to a VF
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vf_id: VF ID
> > + * @notification: notification
> > + *
> > + * This function sends a notification (as defined in dlb2_mbox.h) to a VF.
> > + *
> > + * Return:
> > + * Returns 0 upon success, <0 if the VF doesn't ACK the PF->VF interrupt.
> > + */
> > +int dlb2_notify_vf(struct dlb2_hw *hw,
> > +                  unsigned int vf_id,
> > +                  u32 notification);
> > +
> > +/**
> > + * dlb2_vdev_in_use() - query whether a virtual device is in use
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual device ID
> > + *
> > + * This function sends a mailbox request to the vdev to query whether the vdev
> > + * is in use.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 for false, 1 for true, and <0 if the mailbox request times out or
> > + * an internal error occurs.
> > + */
> > +int dlb2_vdev_in_use(struct dlb2_hw *hw, unsigned int id);
> > +
> > +/**
> > + * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * Clearing the PMCSR must be done at initialization to make the device fully
> > + * operational.
> > + */
> > +void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: queue depth args
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns the depth of a load-balanced queue.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the depth.
> > + *
> > + * Errors:
> > + * EINVAL - Invalid domain ID or queue ID.
> > + */
> > +int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
> > +                               u32 domain_id,
> > +                               struct dlb2_get_ldb_queue_depth_args *args,
> > +                               struct dlb2_cmd_response *resp,
> > +                               bool vdev_request,
> > +                               unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @domain_id: domain ID.
> > + * @args: queue depth args
> > + * @resp: response structure.
> > + * @vdev_request: indicates whether this request came from a vdev.
> > + * @vdev_id: If vdev_request is true, this contains the vdev's ID.
> > + *
> > + * This function returns the depth of a directed queue.
> > + *
> > + * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
> > + * device.
> > + *
> > + * Return:
> > + * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
> > + * assigned a detailed error code from enum dlb2_error. If successful, resp->id
> > + * contains the depth.
> > + *
> > + * Errors:
> > + * EINVAL - Invalid domain ID or queue ID.
> > + */
> > +int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
> > +                               u32 domain_id,
> > +                               struct dlb2_get_dir_queue_depth_args *args,
> > +                               struct dlb2_cmd_response *resp,
> > +                               bool vdev_request,
> > +                               unsigned int vdev_id);
> > +
> > +enum dlb2_virt_mode {
> > +       DLB2_VIRT_NONE,
> > +       DLB2_VIRT_SRIOV,
> > +       DLB2_VIRT_SIOV,
> > +
> > +       /* NUM_DLB2_VIRT_MODES must be last */
> > +       NUM_DLB2_VIRT_MODES,
> > +};
> > +
> > +/**
> > + * dlb2_hw_set_virt_mode() - set the device's virtualization mode
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @mode: either none, SR-IOV, or Scalable IOV.
> > + *
> > + * This function sets the virtualization mode of the device. This controls
> > + * whether the device uses a software or hardware mailbox.
> > + *
> > + * This should be called by the PF driver when either SR-IOV or Scalable IOV is
> > + * selected as the virtualization mechanism, and by the VF/VDEV driver during
> > + * initialization after recognizing itself as an SR-IOV or Scalable IOV device.
> > + *
> > + * Errors:
> > + * EINVAL - Invalid mode.
> > + */
> > +int dlb2_hw_set_virt_mode(struct dlb2_hw *hw, enum dlb2_virt_mode mode);
> > +
> > +/**
> > + * dlb2_hw_get_virt_mode() - get the device's virtualization mode
> > + * @hw: dlb2_hw handle for a particular device.
> > + *
> > + * This function gets the virtualization mode of the device.
> > + */
> > +enum dlb2_virt_mode dlb2_hw_get_virt_mode(struct dlb2_hw *hw);
> > +
> > +/**
> > + * dlb2_hw_get_ldb_port_phys_id() - get a physical port ID from its virt ID
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual port ID.
> > + * @vdev_id: vdev ID.
> > + *
> > + * Return:
> > + * Returns >= 0 upon success, -1 otherwise.
> > + */
> > +s32 dlb2_hw_get_ldb_port_phys_id(struct dlb2_hw *hw,
> > +                                u32 id,
> > +                                unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_get_dir_port_phys_id() - get a physical port ID from its virt ID
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @id: virtual port ID.
> > + * @vdev_id: vdev ID.
> > + *
> > + * Return:
> > + * Returns >= 0 upon success, -1 otherwise.
> > + */
> > +s32 dlb2_hw_get_dir_port_phys_id(struct dlb2_hw *hw,
> > +                                u32 id,
> > +                                unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_register_sw_mbox() - register a software mailbox
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vdev_id: vdev ID.
> > + * @vdev2pf_mbox: pointer to a 4KB memory page used for vdev->PF communication.
> > + * @pf2vdev_mbox: pointer to a 4KB memory page used for PF->vdev communication.
> > + * @pf2vdev_inject: callback function for injecting a PF->vdev interrupt.
> > + * @inject_arg: user argument for pf2vdev_inject callback.
> > + *
> > + * When Scalable IOV is enabled, the VDCM must register a software mailbox for
> > + * every virtual device during vdev creation.
> > + *
> > + * This function notifies the driver to use a software mailbox using the
> > + * provided pointers, instead of the device's hardware mailbox. When the driver
> > + * calls mailbox functions like dlb2_pf_write_vf_mbox_req(), the request will
> > + * go to the software mailbox instead of the hardware one. This is used in
> > + * Scalable IOV virtualization.
> > + */
> > +void dlb2_hw_register_sw_mbox(struct dlb2_hw *hw,
> > +                             unsigned int vdev_id,
> > +                             u32 *vdev2pf_mbox,
> > +                             u32 *pf2vdev_mbox,
> > +                             void (*pf2vdev_inject)(void *),
> > +                             void *inject_arg);
> > +
> > +/**
> > + * dlb2_hw_unregister_sw_mbox() - unregister a software mailbox
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vdev_id: vdev ID.
> > + *
> > + * This function notifies the driver to stop using a previously registered
> > + * software mailbox.
> > + */
> > +void dlb2_hw_unregister_sw_mbox(struct dlb2_hw *hw, unsigned int vdev_id);
> > +
> > +/**
> > + * dlb2_hw_setup_cq_ims_entry() - setup a CQ's IMS entry
> > + * @hw: dlb2_hw handle for a particular device.
> > + * @vdev_id: vdev ID.
> > + * @virt_cq_id: virtual CQ ID.
> > + * @is_ldb: CQ is load-balanced.
> > + * @addr_lo: least-significant 32 bits of address.
> > + * @data: 32 data bits.
> > + *
> > + * This sets up the CQ's IMS entry with the provided address and data values.
> > + * This function should only be called if the device is configured for Scalable
> > + * IOV virtualization. The upper 32 address bits are fixed in hardware and thus
> > + * not needed.
> > + */
> > +void dlb2_hw_setup_cq_ims_entry(struct dlb2_hw *hw,