From patchwork Thu Feb 24 23:25:10 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 108330
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To:
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 5/6] net/mlx5: add external RxQ mapping API
Date: Fri, 25 Feb 2022 01:25:10 +0200
Message-ID: <20220224232511.3238707-6-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>

An external queue is a queue that has been created and managed outside the
PMD. The queue's owner might use the PMD to generate flow rules that refer
to these external queues.

When the queue is created in hardware, it is given an ID represented by
32 bits. In contrast, the queue index in the PMD is represented by 16 bits.
To enable flow rule creation through the PMD, the queue owner must provide
a mapping between the 32-bit HW index and a 16-bit index corresponding to
the RTE Flow API.

This patch adds an API to insert and cancel a mapping between a HW queue ID
and an RTE Flow queue ID.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c |  17 +++++
 drivers/net/mlx5/mlx5.c          |   1 +
 drivers/net/mlx5/mlx5.h          |   1 +
 drivers/net/mlx5/mlx5_defs.h     |   3 +
 drivers/net/mlx5/mlx5_ethdev.c   |  16 ++++-
 drivers/net/mlx5/mlx5_rx.h       |   6 ++
 drivers/net/mlx5/mlx5_rxq.c      | 117 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h  |  50 ++++++++++++-
 drivers/net/mlx5/version.map     |   3 +
 9 files changed, 210 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2e1606a733..a847ed13cc 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1158,6 +1158,22 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOMEM;
 		goto error;
 	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQ is both supported and requested.
+	 */
+	if (mlx5_imported_pd_and_ctx(sh->cdev) && mlx5_devx_obj_ops_en(sh)) {
+		priv->ext_rxqs = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
+					     sizeof(struct mlx5_external_rxq) *
+					     MLX5_MAX_EXT_RX_QUEUES, 0,
+					     SOCKET_ID_ANY);
+		if (priv->ext_rxqs == NULL) {
+			DRV_LOG(ERR, "Fail to allocate external RxQ array.");
+			err = ENOMEM;
+			goto error;
+		}
+		DRV_LOG(DEBUG, "External RxQ is supported.");
+	}
 	priv->sh = sh;
 	priv->dev_port = spawn->phys_port;
 	priv->pci_dev = spawn->pci_dev;
@@ -1617,6 +1633,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		mlx5_list_destroy(priv->hrxqs);
 	if (eth_dev && priv->flex_item_map)
 		mlx5_flex_item_port_cleanup(eth_dev);
+	mlx5_free(priv->ext_rxqs);
 	mlx5_free(priv);
 	if (eth_dev != NULL)
 		eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7611fdd62b..5ecca2dd1b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1930,6 +1930,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		dev->data->port_id);
 	if (priv->hrxqs)
 		mlx5_list_destroy(priv->hrxqs);
+	mlx5_free(priv->ext_rxqs);
 	/*
 	 * Free the shared context in last turn, because the cleanup
 	 * routines above may use some shared fields, like
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bd69aa2334..0f825396a2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1461,6 +1461,7 @@ struct mlx5_priv {
 	/* RX/TX queues. */
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
+	struct mlx5_external_rxq *ext_rxqs; /* External RX queues array. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 2d48fde010..15728fb41f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -175,6 +175,9 @@
 /* Maximum number of indirect actions supported by rte_flow */
 #define MLX5_MAX_INDIRECT_ACTIONS 3
 
+/* Maximum number of external Rx queues supported by rte_flow */
+#define MLX5_MAX_EXT_RX_QUEUES (UINT16_MAX - MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1)
+
 /*
  * Linux definition of static_assert is found in /usr/include/assert.h.
  * Windows does not require a redefinition.
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 406761ccf8..de0ba2b1ff 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -27,6 +27,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 /**
  * Get the interface index from device name.
@@ -81,9 +82,10 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	priv->rss_conf.rss_key =
-		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
-			     MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
+	priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+					      MLX5_MEM_RTE,
+					      MLX5_RSS_HASH_KEY_LEN, 0,
+					      SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -127,6 +129,14 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	if (priv->ext_rxqs && rxqs_n >= MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u), "
+			"the maximal number of internal Rx queues is %u",
+			dev->data->port_id, rxqs_n,
+			MLX5_EXTERNAL_RX_QUEUE_ID_MIN - 1);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 	if (rxqs_n != priv->rxqs_n) {
 		DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fbc86dcef2..aba05dffa7 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -175,6 +175,12 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 };
 
+/* External RX queue descriptor. */
+struct mlx5_external_rxq {
+	uint32_t hw_id; /* Queue index in the Hardware. */
+	uint32_t refcnt; /* Reference counter. */
+};
+
 /* mlx5_rxq.c */
 
 extern uint8_t rss_hash_default_key[];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e96584d55d..889428f48a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -30,6 +30,7 @@
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_devx.h"
+#include "rte_pmd_mlx5.h"
 
 
 /* Default RSS hash key also used for ConnectX-3. */
@@ -3008,3 +3009,119 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 		data->rt_timestamp = sh->dev_cap.rt_timestamp;
 	}
 }
+
+/**
+ * Validate given external RxQ rte_flow index, and get pointer to concurrent
+ * external RxQ object to map/unmap.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   Pointer to concurrent external RxQ on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_external_rxq *
+mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (dpdk_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN) {
+		DRV_LOG(ERR, "Queue index %u should be in range: [%u, %u].",
+			dpdk_idx, MLX5_EXTERNAL_RX_QUEUE_ID_MIN, UINT16_MAX);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if (!mlx5_imported_pd_and_ctx(priv->sh->cdev)) {
+		DRV_LOG(ERR, "Port %u "
+			"external RxQ isn't supported on local PD and CTX.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	if (!mlx5_devx_obj_ops_en(priv->sh)) {
+		DRV_LOG(ERR,
+			"Port %u external RxQ isn't supported by Verbs API.",
+			port_id);
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	/*
+	 * When user configures remote PD and CTX and device creates RxQ by
+	 * DevX, external RxQs array is allocated.
+	 */
+	MLX5_ASSERT(priv->ext_rxqs != NULL);
+	return &priv->ext_rxqs[dpdk_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+				      uint32_t hw_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+	uint32_t unmapped = 0;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &unmapped, 1, false,
+					 __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {
+		if (ext_rxq->hw_id != hw_idx) {
+			DRV_LOG(ERR, "Port %u external RxQ index %u "
+				"is already mapped to HW index (requesting is "
+				"%u, existing is %u).",
+				port_id, dpdk_idx, hw_idx, ext_rxq->hw_id);
+			rte_errno = EEXIST;
+			return -rte_errno;
+		}
+		DRV_LOG(WARNING, "Port %u external RxQ index %u "
+			"is already mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+
+	} else {
+		ext_rxq->hw_id = hw_idx;
+		DRV_LOG(DEBUG, "Port %u external RxQ index %u "
+			"is successfully mapped to the requested HW index (%u)",
+			port_id, dpdk_idx, hw_idx);
+	}
+	return 0;
+}
+
+int
+rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
+{
+	struct mlx5_external_rxq *ext_rxq;
+	uint32_t mapped = 1;
+
+	ext_rxq = mlx5_external_rx_queue_get_validate(port_id, dpdk_idx);
+	if (ext_rxq == NULL)
+		return -rte_errno;
+	if (ext_rxq->refcnt > 1) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u still referenced.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (!__atomic_compare_exchange_n(&ext_rxq->refcnt, &mapped, 0, false,
+					 __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {
+		DRV_LOG(ERR, "Port %u external RxQ index %u doesn't exist.",
+			port_id, dpdk_idx);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	DRV_LOG(DEBUG,
+		"Port %u external RxQ index %u is successfully unmapped.",
+		port_id, dpdk_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fc37a386db..6e7907ee59 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -61,8 +61,56 @@ int rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n);
 __rte_experimental
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains);
 
+/**
+ * External Rx queue rte_flow index minimal value.
+ */
+#define MLX5_EXTERNAL_RX_QUEUE_ID_MIN (UINT16_MAX - 1000 + 1)
+
+/**
+ * Update mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ * @param[in] hw_idx
+ *   Queue index in hardware.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EEXIST - a mapping with the same rte_flow index already exists.
+ *   - EINVAL - invalid rte_flow index, out of range.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
+					  uint32_t hw_idx);
+
+/**
+ * Remove mapping between rte_flow queue index (16 bits) and HW queue index (32
+ * bits) for RxQs which are created outside the PMD.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] dpdk_idx
+ *   Queue index in rte_flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid index, out of range, still referenced or doesn't exist.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ *   - ENOTSUP - the port doesn't support external RxQ.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
+					    uint16_t dpdk_idx);
+
 #ifdef __cplusplus
 }
 #endif
 
-#endif
+#endif /* RTE_PMD_PRIVATE_MLX5_H_ */
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 0af7a12488..79cb79acc6 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -9,4 +9,7 @@ EXPERIMENTAL {
 	rte_pmd_mlx5_get_dyn_flag_names;
 	# added in 20.11
 	rte_pmd_mlx5_sync_flow;
+	# added in 22.03
+	rte_pmd_mlx5_external_rx_queue_id_map;
+	rte_pmd_mlx5_external_rx_queue_id_unmap;
 };
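
For readers who want to see the intended call flow of the new API, below is a
minimal application-side usage sketch. It is not part of the patch: it assumes
the application has already created an RxQ in hardware outside the PMD (for
example through DevX on the imported context) and knows its 32-bit HW index,
passed here as the hypothetical hw_rxq_id; steer_to_external_rxq() is likewise
an illustrative helper, not an existing function.

#include <rte_errno.h>
#include <rte_flow.h>
#include <rte_pmd_mlx5.h>

static int
steer_to_external_rxq(uint16_t port_id, uint32_t hw_rxq_id)
{
	/* Pick the first rte_flow index reserved for external RxQs. */
	uint16_t flow_qid = MLX5_EXTERNAL_RX_QUEUE_ID_MIN;
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = flow_qid };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow;
	int ret;

	/* Map the 32-bit HW queue index to a 16-bit rte_flow queue index. */
	ret = rte_pmd_mlx5_external_rx_queue_id_map(port_id, flow_qid,
						    hw_rxq_id);
	if (ret != 0)
		return ret; /* rte_errno is set by the PMD. */
	/* The mapped index is now usable as a regular queue in flow rules. */
	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
	if (flow == NULL) {
		rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, flow_qid);
		return -rte_errno;
	}
	/* ... traffic is received on the external queue outside the PMD ... */
	rte_flow_destroy(port_id, flow, &error);
	/* Remove the mapping once no flow rule references the queue. */
	return rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, flow_qid);
}

Unmapping while a flow rule still references the queue fails with EINVAL, so
the sketch destroys the rule first; mapping the same index twice to the same
HW queue is tolerated with a warning, while remapping it to a different HW
queue fails with EEXIST, matching the semantics in mlx5_rxq.c above.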