[RFC,00/29] cover letter for net/qdma PMD

Message ID: 20220706075219.517046-1-aman.kumar@vvdntech.in

Aman Kumar July 6, 2022, 7:51 a.m. UTC
This patch series provides a net PMD for VVDN's NR (5G) Hi-PHY solution over
the T1 telco card. These telco accelerator NIC cards are targeted at O-RAN DU
systems to offload inline NR Hi-PHY (split 7.2) operations. To the DU host,
the cards typically appear as basic NIC devices. The device is based on
AMD/Xilinx's UltraScale+ MPSoC and RFSoC FPGAs, for which the inline Hi-PHY IP
is developed by VVDN Technologies Private Limited.
PCI_VENDOR: 0x1f44, Supported Devices: 0x0201, 0x0281 (an illustrative
registration sketch follows the notes below)

Hardware-specs:
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf

- This series is an RFC targeting DPDK v22.11.
- Currently, the PMD is supported only on x86_64 hosts.
- Build machine used: Fedora 36 with gcc 12.1.1.
- The device communicates with the host over AMD/Xilinx's QDMA subsystem for
  the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
- The QDMA access library is part of the PMD [PATCH 06].
- The DPDK documentation for this device (doc/guides/nics/*) is a work in
  progress and will be included in the next version of the patchset.
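
Below is a minimal, hypothetical sketch of how a DPDK PMD of this kind
typically advertises the PCI IDs listed above. It is not taken from the
patches: the qdma_* names, the probe/remove stubs, and the driver flags are
illustrative assumptions, written against the rte_bus_pci.h driver API
available at the time of this RFC.

/* Illustrative sketch only: PCI ID table and driver registration. */
#include <rte_bus_pci.h>
#include <rte_common.h>

#define QDMA_PCI_VENDOR_ID 0x1f44 /* vendor ID from the cover letter */

static const struct rte_pci_id qdma_pci_id_tbl[] = {
	{ RTE_PCI_DEVICE(QDMA_PCI_VENDOR_ID, 0x0201) },
	{ RTE_PCI_DEVICE(QDMA_PCI_VENDOR_ID, 0x0281) },
	{ .vendor_id = 0 }, /* sentinel */
};

static int
qdma_pci_probe(struct rte_pci_driver *drv __rte_unused,
	       struct rte_pci_device *pci_dev __rte_unused)
{
	/* Hardware and ethdev initialization would go here (PATCH 05). */
	return 0;
}

static int
qdma_pci_remove(struct rte_pci_device *pci_dev __rte_unused)
{
	/* Device teardown would go here. */
	return 0;
}

static struct rte_pci_driver qdma_rte_pmd = {
	.id_table = qdma_pci_id_tbl,
	.drv_flags = RTE_PCI_DRV_NEED_MAPPING, /* map BARs before probe */
	.probe = qdma_pci_probe,
	.remove = qdma_pci_remove,
};

RTE_PMD_REGISTER_PCI(net_qdma, qdma_rte_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_qdma, qdma_pci_id_tbl);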

Aman Kumar (29):
  net/qdma: add net PMD template
  maintainers: add maintainer for net/qdma PMD
  net/meson.build: add support to compile net qdma
  net/qdma: add logging support
  net/qdma: add device init and uninit functions
  net/qdma: add qdma access library
  net/qdma: add supported qdma version
  net/qdma: qdma hardware initialization
  net/qdma: define device modes and data structure
  net/qdma: add net PMD ops template
  net/qdma: add configure close and reset ethdev ops
  net/qdma: add routine for Rx queue initialization
  net/qdma: add callback support for Rx queue count
  net/qdma: add routine for Tx queue initialization
  net/qdma: add queue cleanup PMD ops
  net/qdma: add start and stop apis
  net/qdma: add Tx burst API
  net/qdma: add Tx queue reclaim routine
  net/qdma: add callback function for Tx desc status
  net/qdma: add Rx burst API
  net/qdma: add mailbox communication library
  net/qdma: mbox API adaptation in Rx/Tx init
  net/qdma: add support for VF interfaces
  net/qdma: add Rx/Tx queue setup routine for VF devices
  net/qdma: add basic PMD ops for VF
  net/qdma: add datapath burst API for VF
  net/qdma: add device specific APIs for export
  net/qdma: add additional debug APIs
  net/qdma: add stats PMD ops for PF and VF
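
To make the shape of the series easier to follow, here is a minimal,
hypothetical sketch of the ethdev plumbing that the ops-template and Rx/Tx
patches above build up: an eth_dev_ops table plus Rx/Tx burst hooks. Nothing
here is copied from the patches; all qdma_* names and the stub bodies are
assumptions, written against DPDK's driver-side ethdev_driver.h API.

/* Illustrative sketch only: wiring ethdev ops and burst functions. */
#include <ethdev_driver.h> /* driver-side API: struct eth_dev_ops */
#include <rte_common.h>
#include <rte_mbuf.h>

static int
qdma_dev_configure(struct rte_eth_dev *dev __rte_unused)
{
	return 0; /* validate and apply the port configuration */
}

static int
qdma_dev_start(struct rte_eth_dev *dev __rte_unused)
{
	return 0; /* program queue contexts and enable DMA */
}

static int
qdma_dev_stop(struct rte_eth_dev *dev __rte_unused)
{
	return 0; /* quiesce the queues */
}

static int
qdma_dev_close(struct rte_eth_dev *dev __rte_unused)
{
	return 0; /* release queue and device resources */
}

static const struct eth_dev_ops qdma_eth_dev_ops = {
	.dev_configure = qdma_dev_configure,
	.dev_start     = qdma_dev_start,
	.dev_stop      = qdma_dev_stop,
	.dev_close     = qdma_dev_close,
	/* rx/tx queue setup, stats, etc. are added by later patches */
};

/* Datapath stubs: a real PMD polls and fills DMA descriptor rings here. */
static uint16_t
qdma_recv_pkts(void *rxq __rte_unused, struct rte_mbuf **rx_pkts __rte_unused,
	       uint16_t nb_pkts __rte_unused)
{
	return 0; /* number of packets received */
}

static uint16_t
qdma_xmit_pkts(void *txq __rte_unused, struct rte_mbuf **tx_pkts __rte_unused,
	       uint16_t nb_pkts __rte_unused)
{
	return 0; /* number of packets accepted for transmit */
}

/* Normally called from the PCI probe path. */
static int __rte_unused
qdma_eth_dev_init(struct rte_eth_dev *dev)
{
	dev->dev_ops = &qdma_eth_dev_ops;
	dev->rx_pkt_burst = qdma_recv_pkts;
	dev->tx_pkt_burst = qdma_xmit_pkts;
	return 0;
}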

 MAINTAINERS                                   |    4 +
 drivers/net/meson.build                       |    1 +
 drivers/net/qdma/meson.build                  |   44 +
 drivers/net/qdma/qdma.h                       |  354 +
 .../eqdma_soft_access/eqdma_soft_access.c     | 5832 ++++++++++++
 .../eqdma_soft_access/eqdma_soft_access.h     |  294 +
 .../eqdma_soft_access/eqdma_soft_reg.h        | 1211 +++
 .../eqdma_soft_access/eqdma_soft_reg_dump.c   | 3908 ++++++++
 .../net/qdma/qdma_access/qdma_access_common.c | 1271 +++
 .../net/qdma/qdma_access/qdma_access_common.h |  888 ++
 .../net/qdma/qdma_access/qdma_access_errors.h |   60 +
 .../net/qdma/qdma_access/qdma_access_export.h |  243 +
 .../qdma/qdma_access/qdma_access_version.h    |   24 +
 drivers/net/qdma/qdma_access/qdma_list.c      |   51 +
 drivers/net/qdma/qdma_access/qdma_list.h      |  109 +
 .../net/qdma/qdma_access/qdma_mbox_protocol.c | 2107 +++++
 .../net/qdma/qdma_access/qdma_mbox_protocol.h |  681 ++
 drivers/net/qdma/qdma_access/qdma_platform.c  |  224 +
 drivers/net/qdma/qdma_access/qdma_platform.h  |  156 +
 .../net/qdma/qdma_access/qdma_platform_env.h  |   32 +
 drivers/net/qdma/qdma_access/qdma_reg_dump.h  |   77 +
 .../net/qdma/qdma_access/qdma_resource_mgmt.c |  787 ++
 .../net/qdma/qdma_access/qdma_resource_mgmt.h |  201 +
 .../qdma_s80_hard_access.c                    | 5851 ++++++++++++
 .../qdma_s80_hard_access.h                    |  266 +
 .../qdma_s80_hard_access/qdma_s80_hard_reg.h  | 2031 +++++
 .../qdma_s80_hard_reg_dump.c                  | 7999 +++++++++++++++++
 .../qdma_soft_access/qdma_soft_access.c       | 6106 +++++++++++++
 .../qdma_soft_access/qdma_soft_access.h       |  280 +
 .../qdma_soft_access/qdma_soft_reg.h          |  570 ++
 drivers/net/qdma/qdma_common.c                |  531 ++
 drivers/net/qdma/qdma_devops.c                | 2009 +++++
 drivers/net/qdma/qdma_devops.h                |  526 ++
 drivers/net/qdma/qdma_ethdev.c                |  722 ++
 drivers/net/qdma/qdma_log.h                   |   16 +
 drivers/net/qdma/qdma_mbox.c                  |  400 +
 drivers/net/qdma/qdma_mbox.h                  |   47 +
 drivers/net/qdma/qdma_rxtx.c                  | 1538 ++++
 drivers/net/qdma/qdma_rxtx.h                  |   36 +
 drivers/net/qdma/qdma_user.c                  |  263 +
 drivers/net/qdma/qdma_user.h                  |  225 +
 drivers/net/qdma/qdma_version.h               |   23 +
 drivers/net/qdma/qdma_vf_ethdev.c             | 1033 +++
 drivers/net/qdma/qdma_xdebug.c                | 1072 +++
 drivers/net/qdma/rte_pmd_qdma.c               | 1728 ++++
 drivers/net/qdma/rte_pmd_qdma.h               |  689 ++
 drivers/net/qdma/version.map                  |   38 +
 47 files changed, 52558 insertions(+)
 create mode 100644 drivers/net/qdma/meson.build
 create mode 100644 drivers/net/qdma/qdma.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_errors.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_export.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_version.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform_env.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_reg_dump.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_common.c
 create mode 100644 drivers/net/qdma/qdma_devops.c
 create mode 100644 drivers/net/qdma/qdma_devops.h
 create mode 100644 drivers/net/qdma/qdma_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_log.h
 create mode 100644 drivers/net/qdma/qdma_mbox.c
 create mode 100644 drivers/net/qdma/qdma_mbox.h
 create mode 100644 drivers/net/qdma/qdma_rxtx.c
 create mode 100644 drivers/net/qdma/qdma_rxtx.h
 create mode 100644 drivers/net/qdma/qdma_user.c
 create mode 100644 drivers/net/qdma/qdma_user.h
 create mode 100644 drivers/net/qdma/qdma_version.h
 create mode 100644 drivers/net/qdma/qdma_vf_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_xdebug.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.h
 create mode 100644 drivers/net/qdma/version.map
  

Comments

Thomas Monjalon July 7, 2022, 6:57 a.m. UTC | #1
06/07/2022 09:51, Aman Kumar:
> [...]
> - The device communicates with the host over AMD/Xilinx's QDMA subsystem for
>   the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma

That's unfortunate; there is something else called QDMA in the NXP solution:
https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
  
Aman Kumar July 7, 2022, 1:55 p.m. UTC | #2
On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> 06/07/2022 09:51, Aman Kumar:
>> [...]
>> - The device communicates with the host over AMD/Xilinx's QDMA subsystem for
>>   the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
> That's unfortunate; there is something else called QDMA in the NXP solution:
> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2

Is this going to create a conflict with this submission? I guess both have been publicly available/known for a long time.
  
Thomas Monjalon July 7, 2022, 2:15 p.m. UTC | #3
07/07/2022 15:55, Aman Kumar:
> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> > [...]
> > That's unfortunate; there is something else called QDMA in the NXP solution:
> > https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
> 
> Is this going to create a conflict with this submission? I guess both have been publicly available/known for a long time.

If it's the marketing name, go for it,
but it is unfortunate.
  
Hemant Agrawal July 7, 2022, 2:19 p.m. UTC | #4
On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> 07/07/2022 15:55, Aman Kumar:
>> [...]
>> Is this going to create a conflict with this submission? I guess both have been publicly available/known for a long time.
> If it's the marketing name, go for it,
> but it is unfortunate.

QDMA is a very generic name and many vendors have IP for it.

My suggestion is to qualify the driver with the vendor name, i.e.
amd_qdma or xilinx_qdma or something similar.

NXP did the same with dpaa2_qdma.
  
Aman Kumar July 18, 2022, 6:15 p.m. UTC | #5
On 07/07/22 7:49 pm, Hemant Agrawal <hemant.agrawal@oss.nxp.com> wrote:
> [...]
> QDMA is a very generic name and many vendors have IP for it.
>
> My suggestion is to qualify the driver with the vendor name, i.e.
> amd_qdma or xilinx_qdma or something similar.
>
> NXP did the same with dpaa2_qdma.

@Thomas, @Hemant,
Thank you for the highlights and suggestions regarding the conflicting names.
We've discussed this internally and came up with the plan below.

For v22.11 DPDK, we would like to submit patches with the following renames:
 drivers/net/qdma -> drivers/net/t1
 drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
 drivers/net/qdma/qdma_access -> drivers/net/t1/base

We plan to split the Xilinx QDMA library into drivers/dma/xilinx_dma/* or drivers/common/* around v23.02, as it currently requires a big rework.
Also, since no other devices currently depend on the Xilinx QDMA library, we would like to submit the renamed items mentioned above under drivers/net/*.
We also plan to submit a bbdev driver early next year, and the rework is planned before that submission (post v22.11).
I'll update this in the v2 patchset. I hope this is OK.
  
Thomas Monjalon July 19, 2022, 12:12 p.m. UTC | #6
18/07/2022 20:15, aman.kumar@vvdntech.in:
> [...]
> For v22.11 DPDK, we would like to submit patches with the following renames:
>  drivers/net/qdma -> drivers/net/t1
>  drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
>  drivers/net/qdma/qdma_access -> drivers/net/t1/base

Curious why "t1"?
What is the meaning?
  
Aman Kumar July 19, 2022, 5:22 p.m. UTC | #7
On 19/07/22 5:42 pm, Thomas Monjalon <thomas@monjalon.net> wrote:
> 18/07/2022 20:15, aman.kumar@vvdntech.in:
> > [...]
> > For v22.11 DPDK, we would like to submit patches with the following renames:
> >   drivers/net/qdma -> drivers/net/t1
> >   drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
> >   drivers/net/qdma/qdma_access -> drivers/net/t1/base
> 
> Curious why "t1"?
> What is the meaning?

The hardware is commercially named the "T1 card"; T1 represents the first card in the telco accelerator series.
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
  
Stephen Hemminger July 2, 2023, 11:36 p.m. UTC | #8
On Tue, 19 Jul 2022 22:52:20 +0530
aman.kumar@vvdntech.in wrote:

> > > Curious why "t1"?
> > > What is the meaning?
> 
> The hardware is commercially named the "T1 card"; T1 represents the first card in the telco accelerator series.
> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf

The discussion around this driver has stalled; either the hardware was abandoned or renamed, or there was no interest.
If you want to get this into a future DPDK release, please resubmit a new version addressing the review comments.
  
Ferruh Yigit July 3, 2023, 9:15 a.m. UTC | #9
On 7/3/2023 12:36 AM, Stephen Hemminger wrote:
> [...]
> The discussion around this driver has stalled; either the hardware was abandoned or renamed, or there was no interest.
> If you want to get this into a future DPDK release, please resubmit a new version addressing the review comments.

Hi Stephen,

I can confirm that this work is stalled, and it is not yet clear whether
there will be a new version, so it is OK to clean this from patchwork.

Thanks,
ferruh