Patch Detail

get: Show a patch.
patch: Update a patch.
put: Update a patch.

GET /api/patches/90082/?format=api

{ "id": 90082, "url": "
http://patchwork.dpdk.org/api/patches/90082/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20210330082212.707-2-pbhagavatula@marvell.com/", "project": { "id": 1, "url": "http://patchwork.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20210330082212.707-2-pbhagavatula@marvell.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20210330082212.707-2-pbhagavatula@marvell.com", "date": "2021-03-30T08:22:04", "name": "[v9,1/8] eventdev: introduce event vector capability", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "0ba12928bd7225cca61cfba6d9fa1c80949adbdb", "submitter": { "id": 1183, "url": "http://patchwork.dpdk.org/api/people/1183/?format=api", "name": "Pavan Nikhilesh Bhagavatula", "email": "pbhagavatula@marvell.com" }, "delegate": { "id": 310, "url": "http://patchwork.dpdk.org/api/users/310/?format=api", "username": "jerin", "first_name": "Jerin", "last_name": "Jacob", "email": "jerinj@marvell.com" }, "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20210330082212.707-2-pbhagavatula@marvell.com/mbox/", "series": [ { "id": 15971, "url": "http://patchwork.dpdk.org/api/series/15971/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=15971", "date": "2021-03-30T08:22:03", "name": "Introduce event vectorization", "version": 9, "mbox": "http://patchwork.dpdk.org/series/15971/mbox/" } ], "comments": "http://patchwork.dpdk.org/api/patches/90082/comments/", "check": "success", "checks": "http://patchwork.dpdk.org/api/patches/90082/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": 
"patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 6CC44A034F;\n\tTue, 30 Mar 2021 10:22:33 +0200 (CEST)", "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 537F4140DA0;\n\tTue, 30 Mar 2021 10:22:33 +0200 (CEST)", "from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com\n [67.231.148.174])\n by mails.dpdk.org (Postfix) with ESMTP id 119A5140DA0\n for <dev@dpdk.org>; Tue, 30 Mar 2021 10:22:31 +0200 (CEST)", "from pps.filterd (m0045849.ppops.net [127.0.0.1])\n by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id\n 12U8KgBZ025691; Tue, 30 Mar 2021 01:22:28 -0700", "from dc5-exch02.marvell.com ([199.233.59.182])\n by mx0a-0016f401.pphosted.com with ESMTP id 37k63bcnkt-1\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT);\n Tue, 30 Mar 2021 01:22:28 -0700", "from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2;\n Tue, 30 Mar 2021 01:22:26 -0700", "from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend\n Transport; Tue, 30 Mar 2021 01:22:26 -0700", "from BG-LT7430.marvell.com (BG-LT7430.marvell.com [10.28.177.176])\n by maili.marvell.com (Postfix) with ESMTP id 4564A3F703F;\n Tue, 30 Mar 2021 01:22:21 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-transfer-encoding : content-type; s=pfpt0220;\n bh=YTifVG/lR1mRpB1E8AqtTP+/MXxIQZaMqE3OV8qDGUI=;\n b=X9t0TEGABbk7Fwyc/MksHB0DHtA0DgJ2rRQfUB5VDVyAwBfhmlNBPJJ+qBuBfMflRBJQ\n Vp7FX+COu680sgciSV5CmtVimtXQVvAd7GQb4Yfj3P7NWOl+QDgf3FB+HfvA88tmRqrc\n 3jmbFzzuPPtqdl9G20wQmTCMZAVip/1GwoJid67dnqpf6e+2Wy+N+o782Huzbj+eCb8j\n 
4D5yEOmsQrVm/SUR006Z1A3x+dzTb2mX4agnFawLqJqLO/NaSPaiBvDaq7R2zgy5fapk\n X0qEBSUxDnAMCJjm+iSh9mH0zUAwQQZNvFX7BtrDvmom0yevlMcr9CDUk4pkDSTywxWr iQ==", "From": "<pbhagavatula@marvell.com>", "To": "<jerinj@marvell.com>, <jay.jayatheerthan@intel.com>,\n <erik.g.carrillo@intel.com>, <abhinandan.gujjar@intel.com>,\n <timothy.mcdaniel@intel.com>, <hemant.agrawal@nxp.com>,\n <harry.van.haaren@intel.com>, <mattias.ronnblom@ericsson.com>,\n <liang.j.ma@intel.com>, Ray Kinsella <mdr@ashroe.eu>, Neil Horman\n <nhorman@tuxdriver.com>", "CC": "<dev@dpdk.org>, Pavan Nikhilesh <pbhagavatula@marvell.com>", "Date": "Tue, 30 Mar 2021 13:52:04 +0530", "Message-ID": "<20210330082212.707-2-pbhagavatula@marvell.com>", "X-Mailer": "git-send-email 2.17.1", "In-Reply-To": "<20210330082212.707-1-pbhagavatula@marvell.com>", "References": "<20210326140850.7332-1-pbhagavatula@marvell.com>\n <20210330082212.707-1-pbhagavatula@marvell.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Content-Type": "text/plain", "X-Proofpoint-ORIG-GUID": "DGzDsHWq_BqCzabp1kzSaT4AuS3WgftA", "X-Proofpoint-GUID": "DGzDsHWq_BqCzabp1kzSaT4AuS3WgftA", "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10434:6.0.369, 18.0.761\n definitions=2021-03-30_02:2021-03-26,\n 2021-03-30 signatures=0", "Subject": "[dpdk-dev] [PATCH v9 1/8] eventdev: introduce event vector\n capability", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "From: Pavan 
Nikhilesh <pbhagavatula@marvell.com>\n\nIntroduce rte_event_vector datastructure which is capable of holding\nmultiple uintptr_t of the same flow thereby allowing applications\nto vectorize their pipeline and reducing the complexity of pipelining\nthe events across multiple stages.\nThis approach also reduces the scheduling overhead on a event device.\n\nAdd a event vector mempool create handler to create mempools based on\nthe best mempool ops available on a given platform.\n\nSigned-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>\nAcked-by: Jerin Jacob <jerinj@marvell.com>\nAcked-by: Ray Kinsella <mdr@ashroe.eu>\nAcked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>\n---\n doc/guides/prog_guide/eventdev.rst | 36 ++++++++++-\n doc/guides/rel_notes/release_21_05.rst | 8 +++\n lib/librte_eventdev/rte_eventdev.c | 42 +++++++++++++\n lib/librte_eventdev/rte_eventdev.h | 82 +++++++++++++++++++++++++-\n lib/librte_eventdev/version.map | 3 +\n 5 files changed, 168 insertions(+), 3 deletions(-)", "diff": "diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst\nindex ccde086f6..fda9c3743 100644\n--- a/doc/guides/prog_guide/eventdev.rst\n+++ b/doc/guides/prog_guide/eventdev.rst\n@@ -63,13 +63,45 @@ the actual event being scheduled is. The payload is a union of the following:\n * ``uint64_t u64``\n * ``void *event_ptr``\n * ``struct rte_mbuf *mbuf``\n+* ``struct rte_event_vector *vec``\n \n-These three items in a union occupy the same 64 bits at the end of the rte_event\n+These four items in a union occupy the same 64 bits at the end of the rte_event\n structure. The application can utilize the 64 bits directly by accessing the\n-u64 variable, while the event_ptr and mbuf are provided as convenience\n+u64 variable, while the event_ptr, mbuf, vec are provided as a convenience\n variables. 
For example the mbuf pointer in the union can used to schedule a\n DPDK packet.\n \n+Event Vector\n+~~~~~~~~~~~~\n+\n+The rte_event_vector struct contains a vector of elements defined by the event\n+type specified in the ``rte_event``. The event_vector structure contains the\n+following data:\n+\n+* ``nb_elem`` - The number of elements held within the vector.\n+\n+Similar to ``rte_event`` the payload of event vector is also a union, allowing\n+flexibility in what the actual vector is.\n+\n+* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.\n+* ``void *ptrs[0]`` - An array of pointers.\n+* ``uint64_t *u64s[0]`` - An array of uint64_t elements.\n+\n+The size of the event vector is related to the total number of elements it is\n+configured to hold, this is achieved by making `rte_event_vector` a variable\n+length structure.\n+A helper function is provided to create a mempool that holds event vector, which\n+takes name of the pool, total number of required ``rte_event_vector``,\n+cache size, number of elements in each ``rte_event_vector`` and socket id.\n+\n+.. 
code-block:: c\n+\n+ rte_event_vector_pool_create(\"vector_pool\", nb_event_vectors, cache_sz,\n+ nb_elements_per_vector, socket_id);\n+\n+The function ``rte_event_vector_pool_create`` creates mempool with the best\n+platform mempool ops.\n+\n Queues\n ~~~~~~\n \ndiff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst\nindex e2b0886a9..4cbf2769f 100644\n--- a/doc/guides/rel_notes/release_21_05.rst\n+++ b/doc/guides/rel_notes/release_21_05.rst\n@@ -106,6 +106,14 @@ New Features\n * Added support for periodic timer mode in eventdev timer adapter.\n * Added support for periodic timer mode in octeontx2 event device driver.\n \n+* **Add Event device vector capability.**\n+\n+ * Added ``rte_event_vector`` data structure which is capable of holding\n+ multiple ``uintptr_t`` of the same flow thereby allowing applications\n+ to vectorize their pipelines and also reduce the complexity of pipelining\n+ the events across multiple stages.\n+ * This also reduces the scheduling overhead on a event device.\n+\n \n Removed Items\n -------------\ndiff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c\nindex b57363f80..be0499c52 100644\n--- a/lib/librte_eventdev/rte_eventdev.c\n+++ b/lib/librte_eventdev/rte_eventdev.c\n@@ -1266,6 +1266,48 @@ int rte_event_dev_selftest(uint8_t dev_id)\n \treturn -ENOTSUP;\n }\n \n+struct rte_mempool *\n+rte_event_vector_pool_create(const char *name, unsigned int n,\n+\t\t\t unsigned int cache_size, uint16_t nb_elem,\n+\t\t\t int socket_id)\n+{\n+\tconst char *mp_ops_name;\n+\tstruct rte_mempool *mp;\n+\tunsigned int elt_sz;\n+\tint ret;\n+\n+\tif (!nb_elem) {\n+\t\tRTE_LOG(ERR, EVENTDEV,\n+\t\t\t\"Invalid number of elements=%d requested\\n\", nb_elem);\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\telt_sz =\n+\t\tsizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));\n+\tmp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,\n+\t\t\t\t 0);\n+\tif 
(mp == NULL)\n+\t\treturn NULL;\n+\n+\tmp_ops_name = rte_mbuf_best_mempool_ops();\n+\tret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);\n+\tif (ret != 0) {\n+\t\tRTE_LOG(ERR, EVENTDEV, \"error setting mempool handler\\n\");\n+\t\tgoto err;\n+\t}\n+\n+\tret = rte_mempool_populate_default(mp);\n+\tif (ret < 0)\n+\t\tgoto err;\n+\n+\treturn mp;\n+err:\n+\trte_mempool_free(mp);\n+\trte_errno = -ret;\n+\treturn NULL;\n+}\n+\n int\n rte_event_dev_start(uint8_t dev_id)\n {\ndiff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h\nindex 9fc39e9ca..dee468ed0 100644\n--- a/lib/librte_eventdev/rte_eventdev.h\n+++ b/lib/librte_eventdev/rte_eventdev.h\n@@ -212,8 +212,10 @@ extern \"C\" {\n \n #include <rte_common.h>\n #include <rte_config.h>\n-#include <rte_memory.h>\n #include <rte_errno.h>\n+#include <rte_mbuf_pool_ops.h>\n+#include <rte_memory.h>\n+#include <rte_mempool.h>\n \n #include \"rte_eventdev_trace_fp.h\"\n \n@@ -913,6 +915,31 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,\n int\n rte_event_dev_close(uint8_t dev_id);\n \n+/**\n+ * Event vector structure.\n+ */\n+struct rte_event_vector {\n+\tuint64_t nb_elem : 16;\n+\t/**< Number of elements in this event vector. */\n+\tuint64_t rsvd : 48;\n+\t/**< Reserved for future use */\n+\tuint64_t impl_opaque;\n+\t/**< Implementation specific opaque value.\n+\t * An implementation may use this field to hold implementation specific\n+\t * value to share between dequeue and enqueue operation.\n+\t * The application should not modify this field.\n+\t */\n+\tunion {\n+\t\tstruct rte_mbuf *mbufs[0];\n+\t\tvoid *ptrs[0];\n+\t\tuint64_t *u64s[0];\n+\t} __rte_aligned(16);\n+\t/**< Start of the vector array union. 
Depending upon the event type the\n+\t * vector array can be an array of mbufs or pointers or opaque u64\n+\t * values.\n+\t */\n+};\n+\n /* Scheduler type definitions */\n #define RTE_SCHED_TYPE_ORDERED 0\n /**< Ordered scheduling\n@@ -986,6 +1013,21 @@ rte_event_dev_close(uint8_t dev_id);\n */\n #define RTE_EVENT_TYPE_ETH_RX_ADAPTER 0x4\n /**< The event generated from event eth Rx adapter */\n+#define RTE_EVENT_TYPE_VECTOR 0x8\n+/**< Indicates that event is a vector.\n+ * All vector event types should be a logical OR of EVENT_TYPE_VECTOR.\n+ * This simplifies the pipeline design as one can split processing the events\n+ * between vector events and normal event across event types.\n+ * Example:\n+ *\tif (ev.event_type & RTE_EVENT_TYPE_VECTOR) {\n+ *\t\t// Classify and handle vector event.\n+ *\t} else {\n+ *\t\t// Classify and handle event.\n+ *\t}\n+ */\n+#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)\n+/**< The event vector generated from cpu for pipelining. */\n+\n #define RTE_EVENT_TYPE_MAX 0x10\n /**< Maximum number of event types */\n \n@@ -1108,6 +1150,8 @@ struct rte_event {\n \t\t/**< Opaque event pointer */\n \t\tstruct rte_mbuf *mbuf;\n \t\t/**< mbuf pointer if dequeued event is associated with mbuf */\n+\t\tstruct rte_event_vector *vec;\n+\t\t/**< Event vector pointer. */\n \t};\n };\n \n@@ -2026,6 +2070,42 @@ rte_event_dev_xstats_reset(uint8_t dev_id,\n */\n int rte_event_dev_selftest(uint8_t dev_id);\n \n+/**\n+ * Get the memory required per event vector based on the number of elements per\n+ * vector.\n+ * This should be used to create the mempool that holds the event vectors.\n+ *\n+ * @param name\n+ * The name of the vector pool.\n+ * @param n\n+ * The number of elements in the mbuf pool.\n+ * @param cache_size\n+ * Size of the per-core object cache. 
See rte_mempool_create() for\n+ * details.\n+ * @param nb_elem\n+ * The number of elements that a single event vector should be able to hold.\n+ * @param socket_id\n+ * The socket identifier where the memory should be allocated. The\n+ * value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the\n+ * reserved zone\n+ *\n+ * @return\n+ * The pointer to the newly allocated mempool, on success. NULL on error\n+ * with rte_errno set appropriately. Possible rte_errno values include:\n+ * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n+ * - E_RTE_SECONDARY - function was called from a secondary process instance\n+ * - EINVAL - cache size provided is too large, or priv_size is not aligned.\n+ * - ENOSPC - the maximum number of memzones has already been allocated\n+ * - EEXIST - a memzone with the same name already exists\n+ * - ENOMEM - no appropriate memory area found in which to create memzone\n+ * - ENAMETOOLONG - mempool name requested is too long.\n+ */\n+__rte_experimental\n+struct rte_mempool *\n+rte_event_vector_pool_create(const char *name, unsigned int n,\n+\t\t\t unsigned int cache_size, uint16_t nb_elem,\n+\t\t\t int socket_id);\n+\n #ifdef __cplusplus\n }\n #endif\ndiff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map\nindex 3e5c09cfd..a070ef56e 100644\n--- a/lib/librte_eventdev/version.map\n+++ b/lib/librte_eventdev/version.map\n@@ -138,6 +138,9 @@ EXPERIMENTAL {\n \t__rte_eventdev_trace_port_setup;\n \t# added in 20.11\n \trte_event_pmd_pci_probe_named;\n+\n+\t#added in 21.05\n+\trte_event_vector_pool_create;\n };\n \n INTERNAL {\n", "prefixes": [ "v9", "1/8" ] }