Message ID | 20220706075219.517046-1-aman.kumar@vvdntech.in (mailing list archive) |
---|---|
Headers | From: Aman Kumar <aman.kumar@vvdntech.in> To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, david.marchand@redhat.com, aman.kumar@vvdntech.in Subject: [RFC PATCH 00/29] cover letter for net/qdma PMD Date: Wed, 6 Jul 2022 13:21:50 +0530 Message-Id: <20220706075219.517046-1-aman.kumar@vvdntech.in> List-Id: DPDK patches and discussions <dev.dpdk.org> |
Series | cover letter for net/qdma PMD |
Message
Aman Kumar
July 6, 2022, 7:51 a.m. UTC
This patch series provides a net PMD for VVDN's NR (5G) Hi-PHY solution over the T1 telco card. These telco accelerator NIC cards are targeted at O-RAN DU systems to offload inline NR Hi-PHY (split 7.2) operations. To the DU host, the card typically appears as a basic NIC device. The device is based on AMD/Xilinx UltraScale+ MPSoC and RFSoC FPGAs, for which the inline Hi-PHY IP is developed by VVDN Technologies Private Limited.

PCI_VENDOR: 0x1f44, supported devices: 0x0201, 0x0281

Hardware specs:
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf

- This series is an RFC targeted at DPDK v22.11.
- Currently, the PMD supports only x86_64 hosts.
- Build machine used: Fedora 36 with gcc 12.1.1.
- The device communicates with the host over AMD/Xilinx's QDMA subsystem for the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma
- The QDMA access library is part of the PMD [PATCH 06].
- The DPDK documentation (doc/guides/nics/*) for this device is WIP and will be included in the next version of the patch set.

Aman Kumar (29):
  net/qdma: add net PMD template
  maintainers: add maintainer for net/qdma PMD
  net/meson.build: add support to compile net qdma
  net/qdma: add logging support
  net/qdma: add device init and uninit functions
  net/qdma: add qdma access library
  net/qdma: add supported qdma version
  net/qdma: qdma hardware initialization
  net/qdma: define device modes and data structure
  net/qdma: add net PMD ops template
  net/qdma: add configure close and reset ethdev ops
  net/qdma: add routine for Rx queue initialization
  net/qdma: add callback support for Rx queue count
  net/qdma: add routine for Tx queue initialization
  net/qdma: add queue cleanup PMD ops
  net/qdma: add start and stop apis
  net/qdma: add Tx burst API
  net/qdma: add Tx queue reclaim routine
  net/qdma: add callback function for Tx desc status
  net/qdma: add Rx burst API
  net/qdma: add mailbox communication library
  net/qdma: mbox API adaptation in Rx/Tx init
  net/qdma: add support for VF interfaces
  net/qdma: add Rx/Tx queue setup routine for VF devices
  net/qdma: add basic PMD ops for VF
  net/qdma: add datapath burst API for VF
  net/qdma: add device specific APIs for export
  net/qdma: add additional debug APIs
  net/qdma: add stats PMD ops for PF and VF

 MAINTAINERS                                        |    4 +
 drivers/net/meson.build                            |    1 +
 drivers/net/qdma/meson.build                       |   44 +
 drivers/net/qdma/qdma.h                            |  354 +
 .../eqdma_soft_access/eqdma_soft_access.c          | 5832 ++++++++++++
 .../eqdma_soft_access/eqdma_soft_access.h          |  294 +
 .../eqdma_soft_access/eqdma_soft_reg.h             | 1211 +++
 .../eqdma_soft_access/eqdma_soft_reg_dump.c        | 3908 ++++++++
 .../net/qdma/qdma_access/qdma_access_common.c      | 1271 +++
 .../net/qdma/qdma_access/qdma_access_common.h      |  888 ++
 .../net/qdma/qdma_access/qdma_access_errors.h      |   60 +
 .../net/qdma/qdma_access/qdma_access_export.h      |  243 +
 .../qdma/qdma_access/qdma_access_version.h         |   24 +
 drivers/net/qdma/qdma_access/qdma_list.c           |   51 +
 drivers/net/qdma/qdma_access/qdma_list.h           |  109 +
 .../net/qdma/qdma_access/qdma_mbox_protocol.c      | 2107 +++++
 .../net/qdma/qdma_access/qdma_mbox_protocol.h      |  681 ++
 drivers/net/qdma/qdma_access/qdma_platform.c       |  224 +
 drivers/net/qdma/qdma_access/qdma_platform.h       |  156 +
 .../net/qdma/qdma_access/qdma_platform_env.h       |   32 +
 drivers/net/qdma/qdma_access/qdma_reg_dump.h       |   77 +
 .../net/qdma/qdma_access/qdma_resource_mgmt.c      |  787 ++
 .../net/qdma/qdma_access/qdma_resource_mgmt.h      |  201 +
 .../qdma_s80_hard_access.c                         | 5851 ++++++++++++
 .../qdma_s80_hard_access.h                         |  266 +
 .../qdma_s80_hard_access/qdma_s80_hard_reg.h       | 2031 +++++
 .../qdma_s80_hard_reg_dump.c                       | 7999 +++++++++++++++++
 .../qdma_soft_access/qdma_soft_access.c            | 6106 +++++++++++++
 .../qdma_soft_access/qdma_soft_access.h            |  280 +
 .../qdma_soft_access/qdma_soft_reg.h               |  570 ++
 drivers/net/qdma/qdma_common.c                     |  531 ++
 drivers/net/qdma/qdma_devops.c                     | 2009 +++++
 drivers/net/qdma/qdma_devops.h                     |  526 ++
 drivers/net/qdma/qdma_ethdev.c                     |  722 ++
 drivers/net/qdma/qdma_log.h                        |   16 +
 drivers/net/qdma/qdma_mbox.c                       |  400 +
 drivers/net/qdma/qdma_mbox.h                       |   47 +
 drivers/net/qdma/qdma_rxtx.c                       | 1538 ++++
 drivers/net/qdma/qdma_rxtx.h                       |   36 +
 drivers/net/qdma/qdma_user.c                       |  263 +
 drivers/net/qdma/qdma_user.h                       |  225 +
 drivers/net/qdma/qdma_version.h                    |   23 +
 drivers/net/qdma/qdma_vf_ethdev.c                  | 1033 +++
 drivers/net/qdma/qdma_xdebug.c                     | 1072 +++
 drivers/net/qdma/rte_pmd_qdma.c                    | 1728 ++++
 drivers/net/qdma/rte_pmd_qdma.h                    |  689 ++
 drivers/net/qdma/version.map                       |   38 +
 47 files changed, 52558 insertions(+)
 create mode 100644 drivers/net/qdma/meson.build
 create mode 100644 drivers/net/qdma/qdma.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_common.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_errors.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_export.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_access_version.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_list.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_mbox_protocol.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_platform_env.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_reg_dump.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_resource_mgmt.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.c
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_access.h
 create mode 100644 drivers/net/qdma/qdma_access/qdma_soft_access/qdma_soft_reg.h
 create mode 100644 drivers/net/qdma/qdma_common.c
 create mode 100644 drivers/net/qdma/qdma_devops.c
 create mode 100644 drivers/net/qdma/qdma_devops.h
 create mode 100644 drivers/net/qdma/qdma_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_log.h
 create mode 100644 drivers/net/qdma/qdma_mbox.c
 create mode 100644 drivers/net/qdma/qdma_mbox.h
 create mode 100644 drivers/net/qdma/qdma_rxtx.c
 create mode 100644 drivers/net/qdma/qdma_rxtx.h
 create mode 100644 drivers/net/qdma/qdma_user.c
 create mode 100644 drivers/net/qdma/qdma_user.h
 create mode 100644 drivers/net/qdma/qdma_version.h
 create mode 100644 drivers/net/qdma/qdma_vf_ethdev.c
 create mode 100644 drivers/net/qdma/qdma_xdebug.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.c
 create mode 100644 drivers/net/qdma/rte_pmd_qdma.h
 create mode 100644 drivers/net/qdma/version.map
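[Editor's note: for readers unfamiliar with how a net PMD like this hooks into DPDK, the sketch below shows the kind of PCI registration glue the first patches of such a series typically add. It is illustrative only, not code from this series: the vendor/device IDs (0x1f44, 0x0201, 0x0281) come from the cover letter, the helpers are standard DPDK ethdev/PCI APIs, and all other names (qdma_eth_dev_init, rte_qdma_pmd, ...) are hypothetical.]

/*
 * Minimal sketch of PCI registration for a hypothetical net PMD.
 * Built on standard DPDK driver helpers (ethdev_pci.h); not taken
 * from the actual net/qdma patches.
 */
#include <rte_pci.h>
#include <rte_bus_pci.h>
#include <rte_ethdev.h>
#include <ethdev_pci.h>

#define VVDN_PCI_VENDOR_ID 0x1f44 /* from the cover letter */

static const struct rte_pci_id qdma_pci_id_tbl[] = {
	{ RTE_PCI_DEVICE(VVDN_PCI_VENDOR_ID, 0x0201) },
	{ RTE_PCI_DEVICE(VVDN_PCI_VENDOR_ID, 0x0281) },
	{ .vendor_id = 0 }, /* sentinel */
};

/* Per-port init: a real driver would map BARs, bring up the QDMA
 * engine, and install its eth_dev_ops and Rx/Tx burst callbacks here. */
static int
qdma_eth_dev_init(struct rte_eth_dev *eth_dev)
{
	RTE_SET_USED(eth_dev);
	return 0;
}

static int
qdma_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
		       struct rte_pci_device *pci_dev)
{
	/* The second argument is the private data size; a real PMD
	 * passes sizeof(its per-device struct) instead of 0. */
	return rte_eth_dev_pci_generic_probe(pci_dev, 0, qdma_eth_dev_init);
}

static int
qdma_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
{
	return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
}

static struct rte_pci_driver rte_qdma_pmd = {
	.id_table = qdma_pci_id_tbl,
	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
	.probe = qdma_eth_dev_pci_probe,
	.remove = qdma_eth_dev_pci_remove,
};

RTE_PMD_REGISTER_PCI(net_qdma, rte_qdma_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_qdma, qdma_pci_id_tbl);
RTE_PMD_REGISTER_KMOD_DEP(net_qdma, "* igb_uio | uio_pci_generic | vfio-pci");

With glue like this in place, the remaining patches in the series fill in the ethdev ops (configure/close/reset, queue setup, start/stop, stats) and the Rx/Tx burst functions listed above.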
Comments
06/07/2022 09:51, Aman Kumar:
> This patch series provides a net PMD for VVDN's NR (5G) Hi-PHY solution
> over the T1 telco card.
[...]
> - The device communicates with the host over AMD/Xilinx's QDMA subsystem
>   for the PCIe interface. Link: https://docs.xilinx.com/r/en-US/pg302-qdma

That's unfortunate, there is something else called QDMA in the NXP solution:
https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> 06/07/2022 09:51, Aman Kumar:
> [...]
> That's unfortunate, there is something else called QDMA in the NXP solution:
> https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2

Is this going to create a conflict with this submission? I guess both have been publicly available/known for a long time.
07/07/2022 15:55, Aman Kumar:
> On 07/07/22 12:27 pm, Thomas Monjalon wrote:
> > [...]
> > That's unfortunate, there is something else called QDMA in the NXP solution:
> > https://git.dpdk.org/dpdk/tree/drivers/dma/dpaa2
>
> Is this going to create a conflict with this submission? I guess both have
> been publicly available/known for a long time.

If it's the marketing name, go for it, but it is unfortunate.
On 7/7/2022 7:45 PM, Thomas Monjalon wrote:
> 07/07/2022 15:55, Aman Kumar:
> > [...]
> > Is this going to create a conflict with this submission? I guess both
> > have been publicly available/known for a long time.
> If it's the marketing name, go for it, but it is unfortunate.

QDMA is a very generic name and many vendors have IP for it.

My suggestion is to qualify the specific driver with the vendor name, i.e. amd_qdma or xilinx_qdma or something similar.

NXP did the same with dpaa2_qdma.
On 07/07/22 7:49 pm, Hemant Agrawal <hemant.agrawal@oss.nxp.com> wrote:
> [...]
> QDMA is a very generic name and many vendors have IP for it.
>
> My suggestion is to qualify the specific driver with the vendor name, i.e.
> amd_qdma or xilinx_qdma or something similar.
>
> NXP did the same with dpaa2_qdma.

@Thomas, @Hemant,
Thank you for the highlights and suggestions regarding the conflicting names. We've discussed this internally and came up with the plan below.

For v22.11 DPDK, we would like to submit patches with the following renames:
drivers/net/qdma -> drivers/net/t1
drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
drivers/net/qdma/qdma_access -> drivers/net/t1/base

We plan to split the Xilinx QDMA library into drivers/dma/xilinx_dma/* or into drivers/common/* around v23.02, as it currently requires a big rework. Also, since no other devices currently depend on the Xilinx QDMA library, we would like to submit with the "renamed" items as mentioned above under drivers/net/*. We also plan to submit a bbdev device early next year, and the rework is planned before that submission (post v22.11). I'll update this in the v2 patch. I hope this is OK.
18/07/2022 20:15, aman.kumar@vvdntech.in:
> [...]
> For v22.11 DPDK, we would like to submit patches with the following renames:
> drivers/net/qdma -> drivers/net/t1
> drivers/net/qdma/qdma_*.c/h -> drivers/net/t1/t1_*.c/h
> drivers/net/qdma/qdma_access -> drivers/net/t1/base

Curious why "t1"? What is the meaning?
On 19/07/22 5:42 pm, Thomas Monjalon <thomas@monjalon.net> wrote:
> [...]
> Curious why "t1"? What is the meaning?

The hardware is commercially named the "T1 card". T1 represents the first card of the telco accelerator series.
https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf
On Tue, 19 Jul 2022 22:52:20 +0530, aman.kumar@vvdntech.in wrote:
> [...]
> The hardware is commercially named the "T1 card". T1 represents the first
> card of the telco accelerator series.
> https://www.xilinx.com/publications/product-briefs/xilinx-t1-product-brief.pdf

The discussion around this driver has stalled: either the hardware was abandoned or renamed, or there was no interest. If you want to get this into a future DPDK release, please resubmit a new version addressing the review comments.
On 7/3/2023 12:36 AM, Stephen Hemminger wrote:
> [...]
> The discussion around this driver has stalled: either the hardware was
> abandoned or renamed, or there was no interest. If you want to get this
> into a future DPDK release, please resubmit a new version addressing the
> review comments.

Hi Stephen,

I can confirm that this work is stalled, and it is not yet clear whether there will be a new version, so it is OK to clean this from patchwork.

Thanks,
Ferruh