Patch Detail
GET /api/patches/129333/?format=api
{ "id": 129333, "url": "
http://patchwork.dpdk.org/api/patches/129333/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20230706095004.1848199-2-feifei.wang2@arm.com/", "project": { "id": 1, "url": "http://patchwork.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20230706095004.1848199-2-feifei.wang2@arm.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20230706095004.1848199-2-feifei.wang2@arm.com", "date": "2023-07-06T09:50:01", "name": "[v7,1/4] ethdev: add API for mbufs recycle mode", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "2ce299558829cc259f262cc2d29ce001411a8ed7", "submitter": { "id": 1771, "url": "http://patchwork.dpdk.org/api/people/1771/?format=api", "name": "Feifei Wang", "email": "feifei.wang2@arm.com" }, "delegate": { "id": 319, "url": "http://patchwork.dpdk.org/api/users/319/?format=api", "username": "fyigit", "first_name": "Ferruh", "last_name": "Yigit", "email": "ferruh.yigit@amd.com" }, "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20230706095004.1848199-2-feifei.wang2@arm.com/mbox/", "series": [ { "id": 28857, "url": "http://patchwork.dpdk.org/api/series/28857/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=28857", "date": "2023-07-06T09:50:00", "name": "Recycle mbufs from Tx queue to Rx queue", "version": 7, "mbox": "http://patchwork.dpdk.org/series/28857/mbox/" } ], "comments": "http://patchwork.dpdk.org/api/patches/129333/comments/", "check": "success", "checks": "http://patchwork.dpdk.org/api/patches/129333/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": 
"patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id A4DC442DE6;\n\tThu, 6 Jul 2023 11:50:19 +0200 (CEST)", "from mails.dpdk.org (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 57C6242DCC;\n\tThu, 6 Jul 2023 11:50:18 +0200 (CEST)", "from foss.arm.com (foss.arm.com [217.140.110.172])\n by mails.dpdk.org (Postfix) with ESMTP id 8937542DC8\n for <dev@dpdk.org>; Thu, 6 Jul 2023 11:50:17 +0200 (CEST)", "from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])\n by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 24C4DC14;\n Thu, 6 Jul 2023 02:50:59 -0700 (PDT)", "from net-x86-dell-8268.shanghai.arm.com\n (net-x86-dell-8268.shanghai.arm.com [10.169.210.116])\n by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F1BAB3F663;\n Thu, 6 Jul 2023 02:50:13 -0700 (PDT)" ], "From": "Feifei Wang <feifei.wang2@arm.com>", "To": "Thomas Monjalon <thomas@monjalon.net>,\n Ferruh Yigit <ferruh.yigit@amd.com>,\n Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>", "Cc": "dev@dpdk.org, konstantin.v.ananyev@yandex.ru, mb@smartsharesystems.com,\n nd@arm.com, Feifei Wang <feifei.wang2@arm.com>,\n Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,\n Ruifeng Wang <ruifeng.wang@arm.com>", "Subject": "[PATCH v7 1/4] ethdev: add API for mbufs recycle mode", "Date": "Thu, 6 Jul 2023 17:50:01 +0800", "Message-Id": "<20230706095004.1848199-2-feifei.wang2@arm.com>", "X-Mailer": "git-send-email 2.25.1", "In-Reply-To": "<20230706095004.1848199-1-feifei.wang2@arm.com>", "References": "<20230706095004.1848199-1-feifei.wang2@arm.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n 
<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org" }, "content": "Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'\nAPIs to recycle used mbufs from a transmit queue of an Ethernet device,\nand move these mbufs into a mbuf ring for a receive queue of an Ethernet\ndevice. This can bypass mempool 'put/get' operations hence saving CPU\ncycles.\n\nFor each recycling mbufs, the rte_eth_recycle_mbufs() function performs\nthe following operations:\n- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf\nring.\n- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed\nfrom the Tx mbuf ring.\n\nSuggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>\nSuggested-by: Ruifeng Wang <ruifeng.wang@arm.com>\nSigned-off-by: Feifei Wang <feifei.wang2@arm.com>\nReviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>\nReviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>\n---\n doc/guides/rel_notes/release_23_07.rst | 7 +\n lib/ethdev/ethdev_driver.h | 10 ++\n lib/ethdev/ethdev_private.c | 2 +\n lib/ethdev/rte_ethdev.c | 31 +++++\n lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++\n lib/ethdev/rte_ethdev_core.h | 23 +++-\n lib/ethdev/version.map | 2 +\n 7 files changed, 250 insertions(+), 6 deletions(-)", "diff": "diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst\nindex 4459144140..7402262f22 100644\n--- a/doc/guides/rel_notes/release_23_07.rst\n+++ b/doc/guides/rel_notes/release_23_07.rst\n@@ -200,6 +200,13 @@ New Features\n \n Enhanced the GRO library to support TCP packets over IPv6 network.\n \n+* **Add mbufs recycling support. 
**\n+\n+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``\n+ APIs which allow the user to copy used mbufs from the Tx mbuf ring\n+ into the Rx mbuf ring. This feature supports the case that the Rx Ethernet\n+ device is different from the Tx Ethernet device with respective driver\n+ callback functions in ``rte_eth_recycle_mbufs``.\n \n Removed Items\n -------------\ndiff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h\nindex 980f837ab6..b0c55a8523 100644\n--- a/lib/ethdev/ethdev_driver.h\n+++ b/lib/ethdev/ethdev_driver.h\n@@ -58,6 +58,10 @@ struct rte_eth_dev {\n \teth_rx_descriptor_status_t rx_descriptor_status;\n \t/** Check the status of a Tx descriptor */\n \teth_tx_descriptor_status_t tx_descriptor_status;\n+\t/** Pointer to PMD transmit mbufs reuse function */\n+\teth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;\n+\t/** Pointer to PMD receive descriptors refill function */\n+\teth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;\n \n \t/**\n \t * Device data that is shared between primary and secondary processes\n@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,\n typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,\n \tuint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);\n \n+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,\n+\tuint16_t rx_queue_id,\n+\tstruct rte_eth_recycle_rxq_info *recycle_rxq_info);\n+\n typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,\n \tuint16_t queue_id, struct rte_eth_burst_mode *mode);\n \n@@ -1250,6 +1258,8 @@ struct eth_dev_ops {\n \teth_rxq_info_get_t rxq_info_get;\n \t/** Retrieve Tx queue information */\n \teth_txq_info_get_t txq_info_get;\n+\t/** Retrieve mbufs recycle Rx queue information */\n+\teth_recycle_rxq_info_get_t recycle_rxq_info_get;\n \teth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */\n \teth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */\n 
\teth_fw_version_get_t fw_version_get; /**< Get firmware version */\ndiff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c\nindex 14ec8c6ccf..f8ab64f195 100644\n--- a/lib/ethdev/ethdev_private.c\n+++ b/lib/ethdev/ethdev_private.c\n@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,\n \tfpo->rx_queue_count = dev->rx_queue_count;\n \tfpo->rx_descriptor_status = dev->rx_descriptor_status;\n \tfpo->tx_descriptor_status = dev->tx_descriptor_status;\n+\tfpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;\n+\tfpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;\n \n \tfpo->rxq.data = dev->data->rx_queues;\n \tfpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;\ndiff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c\nindex 0840d2b594..ea89a101a1 100644\n--- a/lib/ethdev/rte_ethdev.c\n+++ b/lib/ethdev/rte_ethdev.c\n@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,\n \treturn 0;\n }\n \n+int\n+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,\n+\t\tstruct rte_eth_recycle_rxq_info *recycle_rxq_info)\n+{\n+\tstruct rte_eth_dev *dev;\n+\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);\n+\tdev = &rte_eth_devices[port_id];\n+\n+\tif (queue_id >= dev->data->nb_rx_queues) {\n+\t\tRTE_ETHDEV_LOG(ERR, \"Invalid Rx queue_id=%u\\n\", queue_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (dev->data->rx_queues == NULL ||\n+\t\t\tdev->data->rx_queues[queue_id] == NULL) {\n+\t\tRTE_ETHDEV_LOG(ERR,\n+\t\t\t \"Rx queue %\"PRIu16\" of device with port_id=%\"\n+\t\t\t PRIu16\" has not been setup\\n\",\n+\t\t\t queue_id, port_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (*dev->dev_ops->recycle_rxq_info_get == NULL)\n+\t\treturn -ENOTSUP;\n+\n+\tdev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);\n+\n+\treturn 0;\n+}\n+\n int\n rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,\n \t\t\t struct rte_eth_burst_mode *mode)\ndiff --git 
a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h\nindex 3d44979b44..811bce243b 100644\n--- a/lib/ethdev/rte_ethdev.h\n+++ b/lib/ethdev/rte_ethdev.h\n@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {\n \tuint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */\n } __rte_cache_min_aligned;\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this structure may change without prior notice.\n+ *\n+ * Ethernet device Rx queue information structure for recycling mbufs.\n+ * Used to retrieve Rx queue information when Tx queue reusing mbufs and moving\n+ * them into Rx mbuf ring.\n+ */\n+struct rte_eth_recycle_rxq_info {\n+\tstruct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */\n+\tstruct rte_mempool *mp; /**< mempool of Rx queue. */\n+\tuint16_t *refill_head; /**< head of Rx queue refilling mbufs. */\n+\tuint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */\n+\tuint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */\n+\t/**\n+\t * Requirement on mbuf refilling batch size of Rx mbuf ring.\n+\t * For some PMD drivers, the number of Rx mbuf ring refilling mbufs\n+\t * should be aligned with mbuf ring size, in order to simplify\n+\t * ring wrapping around.\n+\t * Value 0 means that PMD drivers have no requirement for this.\n+\t */\n+\tuint16_t refill_requirement;\n+} __rte_cache_min_aligned;\n+\n /* Generic Burst mode flag definition, values can be ORed. 
*/\n \n /**\n@@ -4852,6 +4876,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,\n int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,\n \tstruct rte_eth_txq_info *qinfo);\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice\n+ *\n+ * Retrieve information about given ports's Rx queue for recycling mbufs.\n+ *\n+ * @param port_id\n+ * The port identifier of the Ethernet device.\n+ * @param queue_id\n+ * The Rx queue on the Ethernet devicefor which information\n+ * will be retrieved.\n+ * @param recycle_rxq_info\n+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.\n+ *\n+ * @return\n+ * - 0: Success\n+ * - -ENODEV: If *port_id* is invalid.\n+ * - -ENOTSUP: routine is not supported by the device PMD.\n+ * - -EINVAL: The queue_id is out of range.\n+ */\n+__rte_experimental\n+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,\n+\t\tuint16_t queue_id,\n+\t\tstruct rte_eth_recycle_rxq_info *recycle_rxq_info);\n+\n /**\n * Retrieve information about the Rx packet burst mode.\n *\n@@ -6526,6 +6575,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,\n \treturn rte_eth_tx_buffer_flush(port_id, queue_id, buffer);\n }\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice\n+ *\n+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move\n+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.\n+ * This can bypass mempool path to save CPU cycles.\n+ *\n+ * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and\n+ * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing Rx\n+ * descriptors. 
The number of recycling mbufs depends on the request of Rx mbuf\n+ * ring, with the constraint of enough used mbufs from Tx mbuf ring.\n+ *\n+ * For each recycling mbufs, the rte_eth_recycle_mbufs() function performs the\n+ * following operations:\n+ *\n+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.\n+ *\n+ * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed\n+ * from the Tx mbuf ring.\n+ *\n+ * This function spilts Rx and Tx path with different callback functions. The\n+ * callback function recycle_tx_mbufs_reuse is for Tx driver. The callback\n+ * function recycle_rx_descriptors_refill is for Rx driver. rte_eth_recycle_mbufs()\n+ * can support the case that Rx Ethernet device is different from Tx Ethernet device.\n+ *\n+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle\n+ * mbufs. Before call this function, users must call rte_eth_recycle_rxq_info_get\n+ * function to retrieve selected Rx queue information.\n+ * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info\n+ *\n+ * Currently, the rte_eth_recycle_mbufs() function can support to feed 1 Rx queue from\n+ * 2 Tx queues in the same thread. 
Do not pair the Rx queue and Tx queue in different\n+ * threads, in order to avoid memory error rewriting.\n+ *\n+ * @param rx_port_id\n+ * Port identifying the receive side.\n+ * @param rx_queue_id\n+ * The index of the receive queue identifying the receive side.\n+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied\n+ * to rte_eth_dev_configure().\n+ * @param tx_port_id\n+ * Port identifying the transmit side.\n+ * @param tx_queue_id\n+ * The index of the transmit queue identifying the transmit side.\n+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied\n+ * to rte_eth_dev_configure().\n+ * @param recycle_rxq_info\n+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains\n+ * the information of the Rx queue mbuf ring.\n+ * @return\n+ * The number of recycling mbufs.\n+ */\n+__rte_experimental\n+static inline uint16_t\n+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,\n+\t\tuint16_t tx_port_id, uint16_t tx_queue_id,\n+\t\tstruct rte_eth_recycle_rxq_info *recycle_rxq_info)\n+{\n+\tstruct rte_eth_fp_ops *p;\n+\tvoid *qd;\n+\tuint16_t nb_mbufs;\n+\n+#ifdef RTE_ETHDEV_DEBUG_TX\n+\tif (tx_port_id >= RTE_MAX_ETHPORTS ||\n+\t\t\ttx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {\n+\t\tRTE_ETHDEV_LOG(ERR,\n+\t\t\t\t\"Invalid tx_port_id=%u or tx_queue_id=%u\\n\",\n+\t\t\t\ttx_port_id, tx_queue_id);\n+\t\treturn 0;\n+\t}\n+#endif\n+\n+\t/* fetch pointer to queue data */\n+\tp = &rte_eth_fp_ops[tx_port_id];\n+\tqd = p->txq.data[tx_queue_id];\n+\n+#ifdef RTE_ETHDEV_DEBUG_TX\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);\n+\n+\tif (qd == NULL) {\n+\t\tRTE_ETHDEV_LOG(ERR, \"Invalid Tx queue_id=%u for port_id=%u\\n\",\n+\t\t\t\ttx_queue_id, tx_port_id);\n+\t\treturn 0;\n+\t}\n+#endif\n+\tif (p->recycle_tx_mbufs_reuse == NULL)\n+\t\treturn 0;\n+\n+\t/* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring\n+\t * into Rx mbuf ring.\n+\t */\n+\tnb_mbufs = p->recycle_tx_mbufs_reuse(qd, 
recycle_rxq_info);\n+\n+\t/* If no recycling mbufs, return 0. */\n+\tif (nb_mbufs == 0)\n+\t\treturn 0;\n+\n+#ifdef RTE_ETHDEV_DEBUG_RX\n+\tif (rx_port_id >= RTE_MAX_ETHPORTS ||\n+\t\t\trx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {\n+\t\tRTE_ETHDEV_LOG(ERR, \"Invalid rx_port_id=%u or rx_queue_id=%u\\n\",\n+\t\t\t\trx_port_id, rx_queue_id);\n+\t\treturn 0;\n+\t}\n+#endif\n+\n+\t/* fetch pointer to queue data */\n+\tp = &rte_eth_fp_ops[rx_port_id];\n+\tqd = p->rxq.data[rx_queue_id];\n+\n+#ifdef RTE_ETHDEV_DEBUG_RX\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);\n+\n+\tif (qd == NULL) {\n+\t\tRTE_ETHDEV_LOG(ERR, \"Invalid Rx queue_id=%u for port_id=%u\\n\",\n+\t\t\t\trx_queue_id, rx_port_id);\n+\t\treturn 0;\n+\t}\n+#endif\n+\n+\tif (p->recycle_rx_descriptors_refill == NULL)\n+\t\treturn 0;\n+\n+\t/* Replenish the Rx descriptors with the recycling\n+\t * into Rx mbuf ring.\n+\t */\n+\tp->recycle_rx_descriptors_refill(qd, nb_mbufs);\n+\n+\treturn nb_mbufs;\n+}\n+\n /**\n * @warning\n * @b EXPERIMENTAL: this API may change without prior notice\ndiff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h\nindex 46e9721e07..a24ad7a6b2 100644\n--- a/lib/ethdev/rte_ethdev_core.h\n+++ b/lib/ethdev/rte_ethdev_core.h\n@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);\n /** @internal Check the status of a Tx descriptor */\n typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);\n \n+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */\n+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,\n+\t\tstruct rte_eth_recycle_rxq_info *recycle_rxq_info);\n+\n+/** @internal Refill Rx descriptors with the recycling mbufs */\n+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);\n+\n /**\n * @internal\n * Structure used to hold opaque pointers to internal ethdev Rx/Tx\n@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {\n \t * Rx fast-path functions and related data.\n \t * 64-bit 
systems: occupies first 64B line\n \t */\n+\t/** Rx queues data. */\n+\tstruct rte_ethdev_qdata rxq;\n \t/** PMD receive function. */\n \teth_rx_burst_t rx_pkt_burst;\n \t/** Get the number of used Rx descriptors. */\n \teth_rx_queue_count_t rx_queue_count;\n \t/** Check the status of a Rx descriptor. */\n \teth_rx_descriptor_status_t rx_descriptor_status;\n-\t/** Rx queues data. */\n-\tstruct rte_ethdev_qdata rxq;\n-\tuintptr_t reserved1[3];\n+\t/** Refill Rx descriptors with the recycling mbufs. */\n+\teth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;\n+\tuintptr_t reserved1[2];\n \t/**@}*/\n \n \t/**@{*/\n@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {\n \t * Tx fast-path functions and related data.\n \t * 64-bit systems: occupies second 64B line\n \t */\n+\t/** Tx queues data. */\n+\tstruct rte_ethdev_qdata txq;\n \t/** PMD transmit function. */\n \teth_tx_burst_t tx_pkt_burst;\n \t/** PMD transmit prepare function. */\n \teth_tx_prep_t tx_pkt_prepare;\n \t/** Check the status of a Tx descriptor. */\n \teth_tx_descriptor_status_t tx_descriptor_status;\n-\t/** Tx queues data. */\n-\tstruct rte_ethdev_qdata txq;\n-\tuintptr_t reserved2[3];\n+\t/** Copy used mbufs from Tx mbuf ring into Rx. */\n+\teth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;\n+\tuintptr_t reserved2[2];\n \t/**@}*/\n \n } __rte_cache_aligned;\ndiff --git a/lib/ethdev/version.map b/lib/ethdev/version.map\nindex fc492ee839..a51ae4a5af 100644\n--- a/lib/ethdev/version.map\n+++ b/lib/ethdev/version.map\n@@ -312,6 +312,8 @@ EXPERIMENTAL {\n \trte_flow_async_action_list_handle_query_update;\n \trte_flow_async_actions_update;\n \trte_flow_restore_info_dynflag;\n+\trte_eth_recycle_mbufs;\n+\trte_eth_recycle_rx_queue_info_get;\n };\n \n INTERNAL {\n", "prefixes": [ "v7", "1/4" ] }
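The commit message in the record above describes the core of `rte_eth_recycle_mbufs()`: copy used mbuf pointers from the Tx mbuf ring straight into the Rx mbuf ring, bypassing the mempool, honoring the PMD's `refill_requirement` batch constraint from `struct rte_eth_recycle_rxq_info`. The following is a toy model in plain Python, not DPDK code; the function name and fields mirror the patch, but the ring arithmetic here is illustrative only (real PMDs operate on hardware descriptor rings).

```python
# Toy model of the mbuf recycle step described in the commit message above.
# Assumptions: tx_free stands in for used mbuf pointers reclaimed from the
# Tx ring; rx_ring and refill_head mirror the mbuf_ring / refill_head
# fields of struct rte_eth_recycle_rxq_info; refill_requirement == 0 means
# the PMD imposes no batch-size constraint.

def recycle_mbufs(tx_free, rx_ring, refill_head, refill_requirement):
    """Copy up to len(tx_free) used mbufs into rx_ring at refill_head,
    in whole batches of refill_requirement; return (count, new head)."""
    n = len(tx_free)
    if refill_requirement:
        n -= n % refill_requirement  # only whole batches, per the API doc
    size = len(rx_ring)
    for i in range(n):
        # Copy used mbuf pointers from the Tx ring into the Rx ring,
        # wrapping around the fixed-size ring.
        rx_ring[(refill_head + i) % size] = tx_free[i]
    return n, (refill_head + n) % size

rx_ring = [None] * 8
nb, head = recycle_mbufs(["mbuf%d" % i for i in range(5)], rx_ring, 6, 4)
print(nb, head)  # 4 mbufs recycled (one whole batch), head wraps to 2
```

Note how a nonzero `refill_requirement` drops the partial batch: 5 reclaimed mbufs with a batch size of 4 yields only 4 recycled, matching the patch's warning that some PMDs need refill counts aligned to simplify ring wrap-around.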
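The record itself is plain JSON from the Patchwork REST endpoint shown at the top (`GET /api/patches/129333/?format=api`), so it can be consumed with any HTTP client plus a JSON parser. A minimal sketch, parsing a trimmed excerpt of the response above (field names and values are taken directly from the record; the triage rule deciding which states are "actionable" is this example's own assumption, not part of the Patchwork API):

```python
import json

# Trimmed excerpt of the Patchwork record shown above.
record = json.loads("""
{
  "id": 129333,
  "name": "[v7,1/4] ethdev: add API for mbufs recycle mode",
  "state": "superseded",
  "archived": true,
  "submitter": {"name": "Feifei Wang", "email": "feifei.wang2@arm.com"},
  "series": [{"id": 28857, "version": 7}]
}
""")

# Example triage check: a superseded or archived patch needs no review.
actionable = (record["state"] not in ("superseded", "rejected", "accepted")
              and not record["archived"])
print(record["name"], "->", "actionable" if actionable else "skip")
```

In a live client one would fetch `record` from the `url` field of the response instead of a literal string; since this patch is both `superseded` (by a later series revision) and `archived`, the check correctly skips it.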