[v9] gro: fix reordering of packets in GRO layer

Message ID 20240117201014.425906-1-kumaraparamesh92@gmail.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Headers
Series [v9] gro: fix reordering of packets in GRO layer

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS

Commit Message

Kumara Parameshwaran Jan. 17, 2024, 8:10 p.m. UTC
  In the current implementation, when a packet is received with
special TCP flag(s) set, only that packet is delivered out of order.
There could already be coalesced packets in the GRO table
belonging to the same flow that have not yet been delivered.
This fix makes sure that the entire segment is delivered with the
special flag(s) set, which is how Linux GRO is implemented as well.

Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Co-authored-by: Kumara Parameshwaran <krathinavel@microsoft.com>
---
	If the received packet is not a pure ACK packet, we check if
	there are any previous packets in the flow; if present, we include
	the received packet in the coalescing logic as well and propagate
	the flags of the last received packet to the entire segment, which
	avoids re-ordering.

	Consider a case where P1(PSH), P2(ACK), P3(ACK) are received in burst mode.
	P1 carries the PSH flag, and since there are no prior packets in the flow,
	we copy it to unprocess_packets, while P2(ACK) and P3(ACK) are merged together.
	In the existing code, P2 and P3 would be delivered as a single segment first and
	the unprocess_packets would be copied later, causing reordering. With this patch,
	the unprocessed packets are copied first, followed by the packets from the GRO table.
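	The ordering change can be illustrated with a small sketch
	(hypothetical names, not the DPDK API): the output buffer is
	filled with the unprocessed packets first, then the packets
	flushed from the GRO table:

```c
#include <stdint.h>

/* Hypothetical sketch of the output ordering fix: packets that bypass
 * coalescing (e.g. P1 with PSH) are emitted before the packets flushed
 * from the GRO table (e.g. the merged P2+P3), preserving arrival order. */
static uint16_t
emit_in_order(const int *unprocessed, uint16_t n_unprocessed,
	      const int *gro_flushed, uint16_t n_flushed,
	      int *out)
{
	uint16_t i, k = 0;

	/* With the patch: unprocessed packets are copied first... */
	for (i = 0; i < n_unprocessed; i++)
		out[k++] = unprocessed[i];
	/* ...and the coalesced packets from the GRO table follow. */
	for (i = 0; i < n_flushed; i++)
		out[k++] = gro_flushed[i];
	return k;
}
```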

	Testing done:
	The csum test-pmd was modified to support the following:
	GET request of 10MB from client to server via test-pmd (static ARP entries added
	on client and server). GRO and TSO were enabled in test-pmd so that packets
	received from the client MAC are sent to the server MAC and vice versa.
	In the above testing, without the patch the client observed re-ordering of
	25 packets, and with the patch no packet re-ordering was observed.

v2: 
	Fix warnings in commit and comment.
	Do not consider packet as candidate to merge if it contains SYN/RST flag.

v3:
	Fix warnings.

v4:
	Rebase with master.

v5:
	Adding co-author email
v6:
	Address review comments from the maintainer to restructure the code
	and handle only the special flags PSH and FIN.

v7:
	Fix warnings and errors

v8:
	Fix warnings and errors

v9:
	Fix commit message 

 lib/gro/gro_tcp.h  | 11 ++++++++
 lib/gro/gro_tcp4.c | 67 +++++++++++++++++++++++++++++-----------------
 2 files changed, 54 insertions(+), 24 deletions(-)
  

Comments

Kumara Parameshwaran Jan. 17, 2024, 8:13 p.m. UTC | #1
On Thu, Jan 18, 2024 at 1:40 AM Kumara Parameshwaran <
kumaraparamesh92@gmail.com> wrote:

Please ignore.
>
  

Patch

diff --git a/lib/gro/gro_tcp.h b/lib/gro/gro_tcp.h
index d926c4b8cc..137a03bc96 100644
--- a/lib/gro/gro_tcp.h
+++ b/lib/gro/gro_tcp.h
@@ -187,4 +187,15 @@  is_same_common_tcp_key(struct cmn_tcp_key *k1, struct cmn_tcp_key *k2)
 	return (!memcmp(k1, k2, sizeof(struct cmn_tcp_key)));
 }
 
+static inline void
+update_tcp_hdr_flags(struct rte_tcp_hdr *tcp_hdr, struct rte_mbuf *pkt)
+{
+	struct rte_tcp_hdr *merged_tcp_hdr;
+
+	merged_tcp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_tcp_hdr *, pkt->l2_len +
+							pkt->l3_len);
+	merged_tcp_hdr->tcp_flags |= tcp_hdr->tcp_flags;
+
+}
+
 #endif
diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index 6645de592b..8af5a8d8a9 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -126,6 +126,7 @@  gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	uint32_t item_idx;
 	uint32_t i, max_flow_num, remaining_flow_num;
 	uint8_t find;
+	uint32_t item_start_idx;
 
 	/*
 	 * Don't process the packet whose TCP header length is greater
@@ -139,13 +140,6 @@  gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
 	hdr_len = pkt->l2_len + pkt->l3_len + pkt->l4_len;
 
-	/*
-	 * Don't process the packet which has FIN, SYN, RST, PSH, URG, ECE
-	 * or CWR set.
-	 */
-	if (tcp_hdr->tcp_flags != RTE_TCP_ACK_FLAG)
-		return -1;
-
 	/* trim the tail padding bytes */
 	ip_tlen = rte_be_to_cpu_16(ipv4_hdr->total_length);
 	if (pkt->pkt_len > (uint32_t)(ip_tlen + pkt->l2_len))
@@ -183,6 +177,7 @@  gro_tcp4_reassemble(struct rte_mbuf *pkt,
 		if (tbl->flows[i].start_index != INVALID_ARRAY_INDEX) {
 			if (is_same_tcp4_flow(tbl->flows[i].key, key)) {
 				find = 1;
+				item_start_idx = tbl->flows[i].start_index;
 				break;
 			}
 			remaining_flow_num--;
@@ -190,28 +185,52 @@  gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	}
 
 	if (find == 0) {
-		sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
-		item_idx = insert_new_tcp_item(pkt, tbl->items, &tbl->item_num,
-						tbl->max_item_num, start_time,
-						INVALID_ARRAY_INDEX, sent_seq, ip_id,
-						is_atomic);
-		if (item_idx == INVALID_ARRAY_INDEX)
+		/*
+		 * Add new flow to the table only if contains ACK flag with data.
+		 * Do not add any packets with additional tcp flags to the GRO table
+		 */
+		if (tcp_hdr->tcp_flags == RTE_TCP_ACK_FLAG) {
+			sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
+			item_idx = insert_new_tcp_item(pkt, tbl->items, &tbl->item_num,
+							tbl->max_item_num, start_time,
+							INVALID_ARRAY_INDEX, sent_seq, ip_id,
+							is_atomic);
+			if (item_idx == INVALID_ARRAY_INDEX)
+				return -1;
+			if (insert_new_flow(tbl, &key, item_idx) ==
+				INVALID_ARRAY_INDEX) {
+				/*
+				 * Fail to insert a new flow, so delete the
+				 * stored packet.
+				 */
+				delete_tcp_item(tbl->items, item_idx, &tbl->item_num,
+					INVALID_ARRAY_INDEX);
+				return -1;
+			}
+			return 0;
+		} else {
 			return -1;
-		if (insert_new_flow(tbl, &key, item_idx) ==
-			INVALID_ARRAY_INDEX) {
-			/*
-			 * Fail to insert a new flow, so delete the
-			 * stored packet.
-			*/
-			delete_tcp_item(tbl->items, item_idx, &tbl->item_num, INVALID_ARRAY_INDEX);
+		}
+	} else {
+		/*
+		 * Any packet with additional flags like PSH,FIN should be processed
+		 * and flushed immediately.
+		 * Hence marking the start time to 0, so that the packets will be flushed
+		 * immediately in timer mode.
+		 */
+		if (tcp_hdr->tcp_flags & (RTE_TCP_ACK_FLAG|RTE_TCP_PSH_FLAG|RTE_TCP_FIN_FLAG)) {
+			if (tcp_hdr->tcp_flags != RTE_TCP_ACK_FLAG)
+				tbl->items[item_start_idx].start_time = 0;
+			return process_tcp_item(pkt, tcp_hdr, tcp_dl, tbl->items,
+						tbl->flows[i].start_index,
+						&tbl->item_num, tbl->max_item_num,
+						ip_id, is_atomic, start_time);
+		} else {
 			return -1;
 		}
-		return 0;
 	}
 
-	return process_tcp_item(pkt, tcp_hdr, tcp_dl, tbl->items, tbl->flows[i].start_index,
-						&tbl->item_num, tbl->max_item_num,
-						ip_id, is_atomic, start_time);
+	return -1;
 }
 
 /*