From patchwork Fri Jan 20 10:21:44 2023
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 122416
X-Patchwork-Delegate: thomas@monjalon.net
From: Volodymyr Fialko <vfialko@marvell.com>
To: dev@dpdk.org, Reshma Pattan
Subject: [PATCH 1/3] reorder: add new drain up to seq number API
Date: Fri, 20 Jan 2023 11:21:44 +0100
Message-ID: <20230120102146.4035460-2-vfialko@marvell.com>
In-Reply-To: <20230120102146.4035460-1-vfialko@marvell.com>
References: <20230120102146.4035460-1-vfialko@marvell.com>

Introduce a new reorder drain API:
`rte_reorder_drain_up_to_seqn` - exhaustively drain all inserted mbufs
up to the given sequence number.

Currently there is no way to force a drain from the reorder buffer:
only consecutive, ready packets can be drained. The new function gives
the user the ability to drain already inserted packets without having
to wait for missing or newer packets.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 lib/reorder/rte_reorder.c | 77 +++++++++++++++++++++++++++++++++++++++
 lib/reorder/rte_reorder.h | 25 +++++++++++++
 lib/reorder/version.map   |  1 +
 3 files changed, 103 insertions(+)

diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 385ee479da..57cc1b286b 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -406,3 +406,80 @@ rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 
 	return drain_cnt;
 }
+
+/* Binary search for seqn in the ready buffer */
+static inline uint32_t
+ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
+{
+	uint32_t mid, value, position, high;
+	uint32_t low = 0;
+
+	if (ready_buf->tail > ready_buf->head)
+		high = ready_buf->tail - ready_buf->head;
+	else
+		high = ready_buf->head - ready_buf->tail;
+
+	while (low <= high) {
+		mid = low + (high - low) / 2;
+		position = (ready_buf->tail + mid) & ready_buf->mask;
+		value = *rte_reorder_seqn(ready_buf->entries[position]);
+		if (seqn == value)
+			return mid;
+		else if (seqn > value)
+			low = mid + 1;
+		else
+			high = mid - 1;
+	}
+
+	return low;
+}
+
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
+{
+	uint32_t i, position, offset;
+	unsigned int drain_cnt = 0;
+
+	struct cir_buffer *order_buf = &b->order_buf,
+			*ready_buf = &b->ready_buf;
+
+	/* Seqn in Ready buffer */
+	if (seqn < b->min_seqn) {
+		/* All sequence numbers are higher than the given one */
+		if (*rte_reorder_seqn(ready_buf->entries[ready_buf->tail]) > seqn)
+			return 0;
+
+		offset = ready_buffer_seqn_find(ready_buf, seqn);
+
+		for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+			position = (ready_buf->tail + i) & ready_buf->mask;
+			mbufs[drain_cnt++] = ready_buf->entries[position];
+			ready_buf->entries[position] = NULL;
+		}
+		ready_buf->tail = (ready_buf->tail + i) & ready_buf->mask;
+
+		return drain_cnt;
+	}
+
+	/* Seqn in Order buffer, add all buffers from Ready buffer */
+	while ((drain_cnt < max_mbufs) && (ready_buf->tail != ready_buf->head)) {
+		mbufs[drain_cnt++] = ready_buf->entries[ready_buf->tail];
+		ready_buf->entries[ready_buf->tail] = NULL;
+		ready_buf->tail = (ready_buf->tail + 1) & ready_buf->mask;
+	}
+
+	/* Fetch buffers from Order buffer up to given sequence number (exclusive) */
+	offset = RTE_MIN(seqn - b->min_seqn, b->order_buf.size);
+	for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+		position = (order_buf->head + i) & order_buf->mask;
+		if (order_buf->entries[position] == NULL)
+			continue;
+		mbufs[drain_cnt++] = order_buf->entries[position];
+		order_buf->entries[position] = NULL;
+	}
+	b->min_seqn += i;
+	order_buf->head = (order_buf->head + i) & order_buf->mask;
+
+	return drain_cnt;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index 5abdb258e2..c5b354b53d 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -167,6 +167,31 @@ unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned max_mbufs);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Fetch a set of reordered packets up to the specified sequence number (exclusive)
+ *
+ * Returns a set of in-order packets from the reorder buffer structure.
+ * Gaps may be present in the returned set, since the reorder buffer returns all
+ * available packets up to the given sequence number.
+ *
+ * @param b
+ *   Reorder buffer instance from which packets are to be drained.
+ * @param mbufs
+ *   Array of mbufs where the reordered packets will be stored.
+ * @param max_mbufs
+ *   The number of elements in the mbufs array.
+ * @param seqn
+ *   Sequence number up to which the buffer will be drained.
+ * @return
+ *   Number of mbuf pointers written to mbufs. 0 <= N < max_mbufs.
+ */
+__rte_experimental
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
+		unsigned int max_mbufs, rte_reorder_seqn_t seqn);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index e114d17730..e3f41ea7ef 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -16,4 +16,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_reorder_seqn_dynfield_offset;
+	rte_reorder_drain_up_to_seqn;
 };
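[Usage sketch, not part of the patch] A minimal illustration of how an application might call
the new drain API, assuming the usual reorder-buffer setup (rte_reorder_create() /
rte_reorder_insert()); the helper name, burst size and "watermark" policy below are
illustrative only.

#include <rte_mbuf.h>
#include <rte_reorder.h>

#define BURST_SIZE 64

/*
 * Release everything the reorder buffer holds below a known watermark,
 * e.g. once the application decides it will no longer wait for the
 * packets missing in front of 'watermark_seqn'.
 */
static void
flush_below_watermark(struct rte_reorder_buffer *rob,
		rte_reorder_seqn_t watermark_seqn)
{
	struct rte_mbuf *out[BURST_SIZE];
	unsigned int n, i;

	/* May return fewer mbufs than requested, and the returned set may
	 * contain gaps: packets that never arrived are simply skipped.
	 */
	n = rte_reorder_drain_up_to_seqn(rob, out, BURST_SIZE, watermark_seqn);

	for (i = 0; i < n; i++)
		rte_pktmbuf_free(out[i]); /* placeholder for real TX/processing */
}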
From patchwork Fri Jan 20 10:21:45 2023
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 122417
X-Patchwork-Delegate: thomas@monjalon.net
From: Volodymyr Fialko <vfialko@marvell.com>
To: dev@dpdk.org, Reshma Pattan
Subject: [PATCH 2/3] reorder: add ability to set min sequence number
Date: Fri, 20 Jan 2023 11:21:45 +0100
Message-ID: <20230120102146.4035460-3-vfialko@marvell.com>
In-Reply-To: <20230120102146.4035460-1-vfialko@marvell.com>
References: <20230120102146.4035460-1-vfialko@marvell.com>

Add the API `rte_reorder_min_seqn_set` to allow the user to specify the
minimum sequence number. Currently the sequence number of the first
inserted packet is used as the minimum sequence number, which does not
work when the application wants to wait for packets that precede the
first one it actually receives.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 lib/reorder/rte_reorder.c | 31 +++++++++++++++++++++++++++++++
 lib/reorder/rte_reorder.h | 18 ++++++++++++++++++
 lib/reorder/version.map   |  1 +
 3 files changed, 50 insertions(+)

diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 57cc1b286b..0d4c7209f9 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -483,3 +483,34 @@ rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbu
 
 	return drain_cnt;
 }
+
+static bool
+rte_reorder_is_empty(const struct rte_reorder_buffer *b)
+{
+	const struct cir_buffer *order_buf = &b->order_buf, *ready_buf = &b->ready_buf;
+	unsigned int i;
+
+	/* Ready buffer does not have gaps */
+	if (ready_buf->tail != ready_buf->head)
+		return false;
+
+	/* Order buffer could have gaps, iterate */
+	for (i = 0; i < order_buf->size; i++) {
+		if (order_buf->entries[i] != NULL)
+			return false;
+	}
+
+	return true;
+}
+
+int
+rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, uint32_t min_seqn)
+{
+	if (!rte_reorder_is_empty(b))
+		return -ENOTEMPTY;
+
+	b->min_seqn = min_seqn;
+	b->is_initialized = true;
+
+	return 0;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index c5b354b53d..9f6710bad2 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -192,6 +192,24 @@ __rte_experimental
 unsigned int
 rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned int max_mbufs, rte_reorder_seqn_t seqn);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set the minimum sequence number of packets allowed to be buffered.
+ * To successfully set a new value, the reorder buffer has to be empty (after create, reset or a full drain).
+ *
+ * @param b
+ *   Empty reorder buffer instance to modify.
+ * @param min_seqn
+ *   New sequence number to set.
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+__rte_experimental
+int
+rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, uint32_t min_seqn);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index e3f41ea7ef..aafdf0b5ae 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -17,4 +17,5 @@ EXPERIMENTAL {
 
 	rte_reorder_seqn_dynfield_offset;
 	rte_reorder_drain_up_to_seqn;
+	rte_reorder_min_seqn_set;
 };
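[Usage sketch, not part of the patch] A minimal illustration of the intended call order,
assuming a freshly created (or reset, or fully drained) buffer and a starting sequence
number that is known in advance; the function and buffer names are illustrative.

#include <rte_errno.h>
#include <rte_reorder.h>

#define REORDER_BUFFER_SIZE 1024

/*
 * Create a reorder buffer and tell it to start at 'first_expected_seqn'
 * instead of deriving the minimum from the first inserted packet. This
 * allows waiting for packets that precede the first one actually received.
 */
static struct rte_reorder_buffer *
reorder_setup(unsigned int socket_id, rte_reorder_seqn_t first_expected_seqn)
{
	struct rte_reorder_buffer *rob;

	rob = rte_reorder_create("flow0_reorder", socket_id, REORDER_BUFFER_SIZE);
	if (rob == NULL)
		return NULL;

	/* Must be called while the buffer is still empty; fails with a
	 * negative value (-ENOTEMPTY) otherwise.
	 */
	if (rte_reorder_min_seqn_set(rob, first_expected_seqn) != 0) {
		rte_reorder_free(rob);
		return NULL;
	}

	return rob;
}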
From patchwork Fri Jan 20 10:21:46 2023
X-Patchwork-Submitter: Volodymyr Fialko
X-Patchwork-Id: 122418
X-Patchwork-Delegate: thomas@monjalon.net
From: Volodymyr Fialko <vfialko@marvell.com>
To: dev@dpdk.org, Reshma Pattan
Subject: [PATCH 3/3] test/reorder: add cases to cover new API
Date: Fri, 20 Jan 2023 11:21:46 +0100
Message-ID: <20230120102146.4035460-4-vfialko@marvell.com>
In-Reply-To: <20230120102146.4035460-1-vfialko@marvell.com>
References: <20230120102146.4035460-1-vfialko@marvell.com>

Add new test cases to cover the `rte_reorder_drain_up_to_seqn` and
`rte_reorder_min_seqn_set` APIs.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test/test_reorder.c | 160 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 160 insertions(+)

diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c
index f0714a5c18..c345a72e0c 100644
--- a/app/test/test_reorder.c
+++ b/app/test/test_reorder.c
@@ -335,6 +335,164 @@ test_reorder_drain(void)
 	return ret;
 }
 
+static void
+buffer_to_reorder_move(struct rte_mbuf **mbuf, struct rte_reorder_buffer *b)
+{
+	rte_reorder_insert(b, *mbuf);
+	*mbuf = NULL;
+}
+
+static int
+test_reorder_drain_up_to_seqn(void)
+{
+	struct rte_mempool *p = test_params->p;
+	struct rte_reorder_buffer *b = NULL;
+	const unsigned int num_bufs = 10;
+	const unsigned int size = 4;
+	unsigned int i, cnt;
+	int ret = 0;
+
+	struct rte_mbuf *bufs[num_bufs];
+	struct rte_mbuf *robufs[num_bufs];
+
+	/* initialize all robufs to NULL */
+	memset(robufs, 0, sizeof(robufs));
+
+	/* This would create a reorder buffer instance consisting of:
+	 * reorder_seq = 0
+	 * ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
+	 * order_buf: OB[size] = {NULL, NULL, NULL, NULL}
+	 */
+	b = rte_reorder_create("test_drain_up_to_seqn", rte_socket_id(), size);
+	TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
+
+	for (i = 0; i < num_bufs; i++) {
+		bufs[i] = rte_pktmbuf_alloc(p);
+		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
+		*rte_reorder_seqn(bufs[i]) = i;
+	}
+
+	/* Insert packets with seqn 1, 2 and 3:
+	 * RB[] = {NULL, NULL, NULL, NULL}
+	 * OB[] = {1, 2, 3, NULL}
+	 */
+	buffer_to_reorder_move(&bufs[1], b);
+	buffer_to_reorder_move(&bufs[2], b);
+	buffer_to_reorder_move(&bufs[3], b);
+	/* Draining 1, 2 */
+	cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 3);
+	if (cnt != 2) {
+		printf("%s:%d:%d: number of expected packets not drained\n",
+				__func__, __LINE__, cnt);
+		ret = -1;
+		goto exit;
+	}
+	for (i = 0; i < 2; i++)
+		rte_pktmbuf_free(robufs[i]);
+	memset(robufs, 0, sizeof(robufs));
+
+	/* Insert more packets
+	 * RB[] = {NULL, NULL, NULL, NULL}
+	 * OB[] = {3, 4, NULL, 6}
+	 */
+	buffer_to_reorder_move(&bufs[4], b);
+	buffer_to_reorder_move(&bufs[6], b);
+	/* Insert one more packet to utilize the Ready buffer:
+	 * RB[] = {3, 4, NULL, NULL}
+	 * OB[] = {NULL, 6, NULL, 8}
+	 */
+	buffer_to_reorder_move(&bufs[8], b);
+
+	/* Drain 3 and 4 */
+	cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 6);
+	if (cnt != 2) {
+		printf("%s:%d:%d: number of expected packets not drained\n",
+				__func__, __LINE__, cnt);
+		ret = -1;
+		goto exit;
+	}
+	for (i = 0; i < 2; i++)
+		rte_pktmbuf_free(robufs[i]);
+	memset(robufs, 0, sizeof(robufs));
+
+	ret = 0;
+exit:
+	rte_reorder_free(b);
+	for (i = 0; i < num_bufs; i++) {
+		rte_pktmbuf_free(bufs[i]);
+		rte_pktmbuf_free(robufs[i]);
+	}
+	return ret;
+}
+
+static int
+test_reorder_set_seqn(void)
+{
+	struct rte_mempool *p = test_params->p;
+	struct rte_reorder_buffer *b = NULL;
+	const unsigned int num_bufs = 7;
+	const unsigned int size = 4;
+	unsigned int i;
+	int ret = 0;
+
+	struct rte_mbuf *bufs[num_bufs];
+
+	/* This would create a reorder buffer instance consisting of:
+	 * reorder_seq = 0
+	 * ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
+	 * order_buf: OB[size] = {NULL, NULL, NULL, NULL}
+	 */
+	b = rte_reorder_create("test_min_seqn_set", rte_socket_id(), size);
+	TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
+
+	for (i = 0; i < num_bufs; i++) {
+		bufs[i] = rte_pktmbuf_alloc(p);
+		if (bufs[i] == NULL) {
+			printf("Packet allocation failed\n");
+			goto exit;
+		}
+		*rte_reorder_seqn(bufs[i]) = i;
+	}
+
+	ret = rte_reorder_min_seqn_set(b, 5);
+	if (ret != 0) {
+		printf("%s:%d: Error in setting min sequence number\n", __func__, __LINE__);
+		ret = -1;
+		goto exit;
+	}
+
+	ret = rte_reorder_insert(b, bufs[0]);
+	if (ret >= 0) {
+		printf("%s:%d: Insertion with seqn below the min sequence number did not fail\n", __func__, __LINE__);
+		ret = -1;
+		goto exit;
+	}
+
+	ret = rte_reorder_insert(b, bufs[5]);
+	if (ret != 0) {
+		printf("%s:%d: Error inserting packet with valid seqn\n", __func__, __LINE__);
+		ret = -1;
+		goto exit;
+	}
+	bufs[5] = NULL;
+
+	ret = rte_reorder_min_seqn_set(b, 0);
+	if (ret >= 0) {
+		printf("%s:%d: Error in setting min sequence number with non-empty buffer\n",
+				__func__, __LINE__);
+		ret = -1;
+		goto exit;
+	}
+
+	ret = 0;
+exit:
+	rte_reorder_free(b);
+	for (i = 0; i < num_bufs; i++)
+		rte_pktmbuf_free(bufs[i]);
+
+	return ret;
+}
+
 static int
 test_setup(void)
 {
@@ -385,6 +543,8 @@ static struct unit_test_suite reorder_test_suite = {
 		TEST_CASE(test_reorder_free),
 		TEST_CASE(test_reorder_insert),
 		TEST_CASE(test_reorder_drain),
+		TEST_CASE(test_reorder_drain_up_to_seqn),
+		TEST_CASE(test_reorder_set_seqn),
 		TEST_CASES_END()
 	}
 };
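[Usage sketch, not part of the patches] To close the series, a sketch of how the new drain
call might fit into a receive path that eventually gives up on missing packets (the buffer's
starting point having been set at flow setup with rte_reorder_min_seqn_set(), as sketched
earlier). The per-flow context, deadline policy and sequence-number extraction are
illustrative assumptions.

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_reorder.h>

#define BURST_SIZE 32

/* Hypothetical per-flow state. */
struct flow_ctx {
	struct rte_reorder_buffer *rob;
	rte_reorder_seqn_t give_up_seqn; /* drain watermark once the deadline expires */
	uint64_t deadline_tsc;           /* how long to keep waiting for gaps */
};

/* Hypothetical: assume the first 4 bytes of the payload carry the flow's
 * sequence number (byte-order handling omitted). Whatever the real protocol,
 * the application must fill the rte_reorder dynfield before insertion.
 */
static inline rte_reorder_seqn_t
pkt_seqn(struct rte_mbuf *m)
{
	return *rte_pktmbuf_mtod(m, const uint32_t *);
}

static void
rx_and_reorder(struct flow_ctx *f, uint16_t port, uint16_t queue)
{
	struct rte_mbuf *rx[BURST_SIZE], *ordered[BURST_SIZE];
	unsigned int nb_out, i;
	uint16_t nb_rx;

	nb_rx = rte_eth_rx_burst(port, queue, rx, BURST_SIZE);
	for (i = 0; i < nb_rx; i++) {
		*rte_reorder_seqn(rx[i]) = pkt_seqn(rx[i]);
		if (rte_reorder_insert(f->rob, rx[i]) != 0)
			rte_pktmbuf_free(rx[i]); /* out of window or no space */
	}

	/* Normal operation: only consecutive (or ready) packets come out. */
	nb_out = rte_reorder_drain(f->rob, ordered, BURST_SIZE);

	/* Once the deadline passes, stop waiting for the gap and force out
	 * everything below the watermark, accepting the remaining holes.
	 */
	if (nb_out == 0 && rte_get_tsc_cycles() > f->deadline_tsc)
		nb_out = rte_reorder_drain_up_to_seqn(f->rob, ordered,
				BURST_SIZE, f->give_up_seqn);

	for (i = 0; i < nb_out; i++)
		rte_pktmbuf_free(ordered[i]); /* placeholder for real TX/processing */
}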