From patchwork Fri Jan 20 10:21:44 2023
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 122416
X-Patchwork-Delegate: thomas@monjalon.net
From: Volodymyr Fialko
To: Reshma Pattan
CC: Volodymyr Fialko
Subject: [PATCH 1/3] reorder: add new drain up to seq number API
Date: Fri, 20 Jan 2023 11:21:44 +0100
Message-ID: <20230120102146.4035460-2-vfialko@marvell.com>
In-Reply-To: <20230120102146.4035460-1-vfialko@marvell.com>
References: <20230120102146.4035460-1-vfialko@marvell.com>
List-Id: DPDK patches and discussions

Introduce a new reorder drain API: `rte_reorder_drain_up_to_seqn()` -
exhaustively drain all inserted mbufs up to the given sequence number.

Currently there is no way to force a drain from the reorder buffer:
only consecutive in-order or ready packets can be drained. The new
function lets the user drain already inserted packets without having
to wait for the missing or newer ones.
Signed-off-by: Volodymyr Fialko
---
 lib/reorder/rte_reorder.c | 77 +++++++++++++++++++++++++++++++++++++++
 lib/reorder/rte_reorder.h | 25 +++++++++++++
 lib/reorder/version.map   |  1 +
 3 files changed, 103 insertions(+)

diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 385ee479da..57cc1b286b 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -406,3 +406,80 @@ rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 
 	return drain_cnt;
 }
+
+/* Binary search seqn in ready buffer */
+static inline uint32_t
+ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
+{
+	uint32_t mid, value, position, high;
+	uint32_t low = 0;
+
+	if (ready_buf->tail > ready_buf->head)
+		high = ready_buf->tail - ready_buf->head;
+	else
+		high = ready_buf->head - ready_buf->tail;
+
+	while (low <= high) {
+		mid = low + (high - low) / 2;
+		position = (ready_buf->tail + mid) & ready_buf->mask;
+		value = *rte_reorder_seqn(ready_buf->entries[position]);
+		if (seqn == value)
+			return mid;
+		else if (seqn > value)
+			low = mid + 1;
+		else
+			high = mid - 1;
+	}
+
+	return low;
+}
+
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
+{
+	uint32_t i, position, offset;
+	unsigned int drain_cnt = 0;
+
+	struct cir_buffer *order_buf = &b->order_buf,
+			*ready_buf = &b->ready_buf;
+
+	/* Seqn in Ready buffer */
+	if (seqn < b->min_seqn) {
+		/* All sequence numbers are higher than given */
+		if (*rte_reorder_seqn(ready_buf->entries[ready_buf->tail]) > seqn)
+			return 0;
+
+		offset = ready_buffer_seqn_find(ready_buf, seqn);
+
+		for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+			position = (ready_buf->tail + i) & ready_buf->mask;
+			mbufs[drain_cnt++] = ready_buf->entries[position];
+			ready_buf->entries[position] = NULL;
+		}
+		ready_buf->tail = (ready_buf->tail + i) & ready_buf->mask;
+
+		return drain_cnt;
+	}
+
+	/* Seqn in Order buffer, add all buffers from Ready buffer */
+	while ((drain_cnt < max_mbufs) && (ready_buf->tail != ready_buf->head)) {
+		mbufs[drain_cnt++] = ready_buf->entries[ready_buf->tail];
+		ready_buf->entries[ready_buf->tail] = NULL;
+		ready_buf->tail = (ready_buf->tail + 1) & ready_buf->mask;
+	}
+
+	/* Fetch buffers from Order buffer up to given sequence number (exclusive) */
+	offset = RTE_MIN(seqn - b->min_seqn, b->order_buf.size);
+	for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+		position = (order_buf->head + i) & order_buf->mask;
+		if (order_buf->entries[position] == NULL)
+			continue;
+		mbufs[drain_cnt++] = order_buf->entries[position];
+		order_buf->entries[position] = NULL;
+	}
+	b->min_seqn += i;
+	order_buf->head = (order_buf->head + i) & order_buf->mask;
+
+	return drain_cnt;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index 5abdb258e2..c5b354b53d 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -167,6 +167,31 @@ unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned max_mbufs);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Fetch set of reordered packets up to specified sequence number (exclusive)
+ *
+ * Returns a set of in-order packets from the reorder buffer structure.
+ * Gaps may be present since reorder buffer will try to fetch all possible
+ * packets up to given sequence number.
+ *
+ * @param b
+ *   Reorder buffer instance from which packets are to be drained.
+ * @param mbufs
+ *   Array of mbufs where reordered packets will be inserted from reorder buffer.
+ * @param max_mbufs
+ *   The number of elements in the mbufs array.
+ * @param seqn
+ *   Sequence number up to which buffer will be drained.
+ * @return
+ *   Number of mbuf pointers written to mbufs. 0 <= N < max_mbufs.
+ */
+__rte_experimental
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		unsigned int max_mbufs, rte_reorder_seqn_t seqn);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index e114d17730..e3f41ea7ef 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -16,4 +16,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_reorder_seqn_dynfield_offset;
+	rte_reorder_drain_up_to_seqn;
 };