From patchwork Mon Feb 20 10:48:28 2023
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 124185
X-Patchwork-Delegate: thomas@monjalon.net
From: Volodymyr Fialko <vfialko@marvell.com>
To: dev@dpdk.org, Reshma Pattan
Cc: Volodymyr Fialko <vfialko@marvell.com>
Subject: [PATCH v2 1/2] reorder: add new drain up to seq number API
Date: Mon, 20 Feb 2023 11:48:28 +0100
Message-ID: <20230220104830.4086788-2-vfialko@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230220104830.4086788-1-vfialko@marvell.com>
References: <20230120102146.4035460-1-vfialko@marvell.com>
 <20230220104830.4086788-1-vfialko@marvell.com>

Introduce a new reorder drain API, `rte_reorder_drain_up_to_seqn`, which
exhaustively drains all inserted mbufs up to the given sequence number.
Currently there is no way to force a drain from the reorder buffer, i.e.
only consecutive, in-order or ready packets can be drained. The new
function lets the user drain inserted packets without having to wait for
missing or newer packets.
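
For illustration only (not part of this patch), a minimal usage sketch:
the names flush_stale_packets(), BURST_SIZE and cutoff below are
hypothetical, only the rte_reorder_*() / rte_pktmbuf_*() calls come from
DPDK. An application that knows no packet older than a given sequence
number can still arrive might force a drain like this:

    #include <rte_mbuf.h>
    #include <rte_reorder.h>

    #define BURST_SIZE 32

    /* Hypothetical helper, not part of this patch. */
    static void
    flush_stale_packets(struct rte_reorder_buffer *b, rte_reorder_seqn_t cutoff)
    {
            struct rte_mbuf *out[BURST_SIZE];
            unsigned int i, n;

            /*
             * Drain everything currently held with a sequence number below
             * 'cutoff', even if earlier packets are still missing; the
             * returned set may therefore contain gaps.
             */
            n = rte_reorder_drain_up_to_seqn(b, out, BURST_SIZE, cutoff);
            for (i = 0; i < n; i++)
                    rte_pktmbuf_free(out[i]); /* or process/transmit them */
    }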

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test/test_reorder.c   | 91 +++++++++++++++++++++++++++++++++++++++
 lib/reorder/rte_reorder.c | 78 +++++++++++++++++++++++++++++++++
 lib/reorder/rte_reorder.h | 26 +++++++++++
 lib/reorder/version.map   |  2 +
 4 files changed, 197 insertions(+)

diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c
index 7b5e590bac..a37dc28f65 100644
--- a/app/test/test_reorder.c
+++ b/app/test/test_reorder.c
@@ -337,6 +337,96 @@ test_reorder_drain(void)
 	return ret;
 }
 
+static void
+buffer_to_reorder_move(struct rte_mbuf **mbuf, struct rte_reorder_buffer *b)
+{
+	rte_reorder_insert(b, *mbuf);
+	*mbuf = NULL;
+}
+
+static int
+test_reorder_drain_up_to_seqn(void)
+{
+	struct rte_mempool *p = test_params->p;
+	struct rte_reorder_buffer *b = NULL;
+	const unsigned int num_bufs = 10;
+	const unsigned int size = 4;
+	unsigned int i, cnt;
+	int ret = 0;
+
+	struct rte_mbuf *bufs[num_bufs];
+	struct rte_mbuf *robufs[num_bufs];
+
+	/* initialize all robufs to NULL */
+	memset(robufs, 0, sizeof(robufs));
+
+	/* This would create a reorder buffer instance consisting of:
+	 * reorder_seq = 0
+	 * ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
+	 * order_buf: OB[size] = {NULL, NULL, NULL, NULL}
+	 */
+	b = rte_reorder_create("test_drain_up_to_seqn", rte_socket_id(), size);
+	TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
+
+	for (i = 0; i < num_bufs; i++) {
+		bufs[i] = rte_pktmbuf_alloc(p);
+		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
+		*rte_reorder_seqn(bufs[i]) = i;
+	}
+
+	/* Insert packets with seqn 1, 2 and 3:
+	 * RB[] = {NULL, NULL, NULL, NULL}
+	 * OB[] = {1, 2, 3, NULL}
+	 */
+	buffer_to_reorder_move(&bufs[1], b);
+	buffer_to_reorder_move(&bufs[2], b);
+	buffer_to_reorder_move(&bufs[3], b);
+	/* Draining 1, 2 */
+	cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 3);
+	if (cnt != 2) {
+		printf("%s:%d:%d: number of expected packets not drained\n",
+				__func__, __LINE__, cnt);
+		ret = -1;
+		goto exit;
+	}
+	for (i = 0; i < 2; i++)
+		rte_pktmbuf_free(robufs[i]);
+	memset(robufs, 0, sizeof(robufs));
+
+	/* Insert more packets
+	 * RB[] = {NULL, NULL, NULL, NULL}
+	 * OB[] = {3, 4, NULL, 6}
+	 */
+	buffer_to_reorder_move(&bufs[4], b);
+	buffer_to_reorder_move(&bufs[6], b);
+	/* Insert more packets to utilize Ready buffer
+	 * RB[] = {3, NULL, 5, 6}
+	 * OB[] = {NULL, NULL, 8, NULL}
+	 */
+	buffer_to_reorder_move(&bufs[8], b);
+
+	/* Drain 3 and 5 */
+	cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 6);
+	if (cnt != 2) {
+		printf("%s:%d:%d: number of expected packets not drained\n",
+				__func__, __LINE__, cnt);
+		ret = -1;
+		goto exit;
+	}
+	for (i = 0; i < 2; i++)
+		rte_pktmbuf_free(robufs[i]);
+	memset(robufs, 0, sizeof(robufs));
+
+	ret = 0;
+exit:
+	rte_reorder_free(b);
+	for (i = 0; i < num_bufs; i++) {
+		rte_pktmbuf_free(bufs[i]);
+		rte_pktmbuf_free(robufs[i]);
+	}
+	return ret;
+}
+
 static int
 test_setup(void)
 {
@@ -387,6 +477,7 @@ static struct unit_test_suite reorder_test_suite = {
 		TEST_CASE(test_reorder_free),
 		TEST_CASE(test_reorder_insert),
 		TEST_CASE(test_reorder_drain),
+		TEST_CASE(test_reorder_drain_up_to_seqn),
 		TEST_CASES_END()
 	}
 };
diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index b38e71f460..bd0e1f8793 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -407,3 +407,81 @@ rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 
 	return drain_cnt;
 }
+
+/* Binary search seqn in ready buffer */
+static inline uint32_t
+ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
+{
+	uint32_t mid, value, position, high;
+	uint32_t low = 0;
+
+	if (ready_buf->tail > ready_buf->head)
+		high = ready_buf->tail - ready_buf->head;
+	else
+		high = ready_buf->head - ready_buf->tail;
+
+	while (low <= high) {
+		mid = low + (high - low) / 2;
+		position = (ready_buf->tail + mid) & ready_buf->mask;
+		value = *rte_reorder_seqn(ready_buf->entries[position]);
+		if (seqn == value)
+			return mid;
+		else if (seqn > value)
+			low = mid + 1;
+		else
+			high = mid - 1;
+	}
+
+	return low;
+}
+
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
+{
+	uint32_t i, position, offset;
+	unsigned int drain_cnt = 0;
+
+	struct cir_buffer *order_buf = &b->order_buf,
+			*ready_buf = &b->ready_buf;
+
+	/* Seqn in Ready buffer */
+	if (seqn < b->min_seqn) {
+		/* All sequence numbers are higher than the given one */
+		if ((ready_buf->tail == ready_buf->head) ||
+		    (*rte_reorder_seqn(ready_buf->entries[ready_buf->tail]) > seqn))
+			return 0;
+
+		offset = ready_buffer_seqn_find(ready_buf, seqn);
+
+		for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+			position = (ready_buf->tail + i) & ready_buf->mask;
+			mbufs[drain_cnt++] = ready_buf->entries[position];
+			ready_buf->entries[position] = NULL;
+		}
+		ready_buf->tail = (ready_buf->tail + i) & ready_buf->mask;
+
+		return drain_cnt;
+	}
+
+	/* Seqn in Order buffer, add all buffers from Ready buffer */
+	while ((drain_cnt < max_mbufs) && (ready_buf->tail != ready_buf->head)) {
+		mbufs[drain_cnt++] = ready_buf->entries[ready_buf->tail];
+		ready_buf->entries[ready_buf->tail] = NULL;
+		ready_buf->tail = (ready_buf->tail + 1) & ready_buf->mask;
+	}
+
+	/* Fetch buffers from Order buffer up to given sequence number (exclusive) */
+	offset = RTE_MIN(seqn - b->min_seqn, b->order_buf.size);
+	for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+		position = (order_buf->head + i) & order_buf->mask;
+		if (order_buf->entries[position] == NULL)
+			continue;
+		mbufs[drain_cnt++] = order_buf->entries[position];
+		order_buf->entries[position] = NULL;
+	}
+	b->min_seqn += i;
+	order_buf->head = (order_buf->head + i) & order_buf->mask;
+
+	return drain_cnt;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index 5abdb258e2..db740495be 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -167,6 +167,32 @@ unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned max_mbufs);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Fetch a set of reordered packets up to the specified sequence number (exclusive).
+ *
+ * Returns a set of in-order packets from the reorder buffer structure.
+ * Gaps may be present, since the reorder buffer will try to fetch all possible
+ * packets up to the given sequence number.
+ *
+ * @param b
+ *   Reorder buffer instance from which packets are to be drained.
+ * @param mbufs
+ *   Array of mbufs where reordered packets will be inserted from the reorder buffer.
+ * @param max_mbufs
+ *   The number of elements in the mbufs array.
+ * @param seqn
+ *   Sequence number up to which the buffer will be drained.
+ * @return
+ *   Number of mbuf pointers written to mbufs. 0 <= N < max_mbufs.
+ */
+__rte_experimental
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		unsigned int max_mbufs, rte_reorder_seqn_t seqn);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index e114d17730..d322da03bd 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -16,4 +16,6 @@ EXPERIMENTAL {
 	global:
 
 	rte_reorder_seqn_dynfield_offset;
+	# added in 23.03
+	rte_reorder_drain_up_to_seqn;
 };
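
For illustration only (not part of the patch or of the reorder library):
ready_buffer_seqn_find() above is essentially a binary search over a run of
ascending sequence numbers stored in a circular array. A standalone sketch of
the same idea, with hypothetical names (entries, mask, tail, len), returning
the count of entries strictly below a given sequence number:

    #include <stdint.h>

    /* Hypothetical sketch: lower-bound binary search over a circular array
     * holding 'len' ascending values starting at index 'tail'; 'mask' is
     * size - 1 for a power-of-two sized array.
     */
    static uint32_t
    circ_lower_bound(const uint32_t *entries, uint32_t mask,
                    uint32_t tail, uint32_t len, uint32_t seqn)
    {
            uint32_t lo = 0, hi = len;

            while (lo < hi) {
                    uint32_t mid = lo + (hi - lo) / 2;
                    uint32_t value = entries[(tail + mid) & mask];

                    if (value < seqn)
                            lo = mid + 1;
                    else
                            hi = mid;
            }
            return lo; /* number of entries with value < seqn */
    }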