[v3,1/5] latencystats: replace use of VLA

Message ID: 20240417170908.76701-2-stephen@networkplumber.org (mailing list archive)
State: Superseded
Delegated to: Thomas Monjalon
Series: latencystats: cleanups

Checks

Context          Check      Description
ci/checkpatch    success    coding style OK

Commit Message

Stephen Hemminger April 17, 2024, 5:07 p.m. UTC
  The temporary array latencystats is not needed if the algorithm
is converted into one pass.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/latencystats/rte_latencystats.c | 31 +++++++++++++++--------------
 1 file changed, 16 insertions(+), 15 deletions(-)
  

Comments

Morten Brørup April 17, 2024, 6:03 p.m. UTC | #1
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, 17 April 2024 19.07
> 
> The temporary array latencystats is not needed if the algorithm
> is converted into one pass.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/latencystats/rte_latencystats.c | 31 +++++++++++++++--------------
>  1 file changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c
> index 4ea9b0d75b..9b345bfb33 100644
> --- a/lib/latencystats/rte_latencystats.c
> +++ b/lib/latencystats/rte_latencystats.c
> @@ -157,9 +157,9 @@ calc_latency(uint16_t pid __rte_unused,
>  		uint16_t nb_pkts,
>  		void *_ __rte_unused)
>  {
> -	unsigned int i, cnt = 0;
> +	unsigned int i;
>  	uint64_t now;
> -	float latency[nb_pkts];
> +	float latency;
>  	static float prev_latency;
>  	/*
>  	 * Alpha represents degree of weighting decrease in EWMA,
> @@ -169,13 +169,14 @@ calc_latency(uint16_t pid __rte_unused,
>  	const float alpha = 0.2;
> 
>  	now = rte_rdtsc();
> -	for (i = 0; i < nb_pkts; i++) {
> -		if (pkts[i]->ol_flags & timestamp_dynflag)
> -			latency[cnt++] = now - *timestamp_dynfield(pkts[i]);
> -	}
> 
>  	rte_spinlock_lock(&glob_stats->lock);
> -	for (i = 0; i < cnt; i++) {
> +	for (i = 0; i < nb_pkts; i++) {
> +		if (!(pkts[i]->ol_flags & timestamp_dynflag))
> +			continue;
> +
> +		latency = now - *timestamp_dynfield(pkts[i]);
> +
>  		/*
>  		 * The jitter is calculated as statistical mean of interpacket
>  		 * delay variation. The "jitter estimate" is computed by taking
> @@ -187,22 +188,22 @@ calc_latency(uint16_t pid __rte_unused,
>  		 * Reference: Calculated as per RFC 5481, sec 4.1,
>  		 * RFC 3393 sec 4.5, RFC 1889 sec.
>  		 */
> -		glob_stats->jitter +=  (fabsf(prev_latency - latency[i])
> +		glob_stats->jitter +=  (fabsf(prev_latency - latency)
>  					- glob_stats->jitter)/16;
>  		if (glob_stats->min_latency == 0)

You could add unlikely() to this comparison. It should only happen once in a lifetime.
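
A minimal sketch of that hint, assuming DPDK's unlikely() macro from <rte_branch_prediction.h> (illustrative only, not part of this patch):

	/* needs #include <rte_branch_prediction.h> for unlikely() */
	if (unlikely(glob_stats->min_latency == 0))
		glob_stats->min_latency = latency;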

> -			glob_stats->min_latency = latency[i];
> -		else if (latency[i] < glob_stats->min_latency)
> -			glob_stats->min_latency = latency[i];
> -		else if (latency[i] > glob_stats->max_latency)
> -			glob_stats->max_latency = latency[i];
> +			glob_stats->min_latency = latency;

In theory, glob_stats->max_latency should also be initialized with the first sample value here.
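
A hedged sketch of that seeding, reusing the field names from this patch and combining it with the unlikely() hint suggested above (illustrative only; per the discussion below, the series addresses this in a later patch):

	if (unlikely(glob_stats->min_latency == 0)) {
		/* first sample seeds both extremes so max_latency does not stay 0 */
		glob_stats->min_latency = latency;
		glob_stats->max_latency = latency;
	}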

> +		else if (latency < glob_stats->min_latency)
> +			glob_stats->min_latency = latency;
> +		else if (latency > glob_stats->max_latency)
> +			glob_stats->max_latency = latency;

The min/max comparisons are also unlikely.
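
A sketch of the same hint applied to the two comparisons (illustrative only):

	else if (unlikely(latency < glob_stats->min_latency))
		glob_stats->min_latency = latency;
	else if (unlikely(latency > glob_stats->max_latency))
		glob_stats->max_latency = latency;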

>  		/*
>  		 * The average latency is measured using exponential moving
>  		 * average, i.e. using EWMA
>  		 * https://en.wikipedia.org/wiki/Moving_average
>  		 */
>  		glob_stats->avg_latency +=
> -			alpha * (latency[i] - glob_stats->avg_latency);
> -		prev_latency = latency[i];
> +			alpha * (latency - glob_stats->avg_latency);
> +		prev_latency = latency;
>  	}
>  	rte_spinlock_unlock(&glob_stats->lock);
> 
> --
> 2.43.0

With or without suggested changes,
Acked-by: Morten Brørup <mb@smartsharesystems.com>
  
Stephen Hemminger April 17, 2024, 6:13 p.m. UTC | #2
On Wed, 17 Apr 2024 20:03:55 +0200
Morten Brørup <mb@smartsharesystems.com> wrote:

> In theory, glob_stats->max_latency should also be initialized with the first sample value here.

The last patch does that.
It was a merge of two patches; I will split them up in the next version.
  
Tyler Retzlaff April 18, 2024, midnight UTC | #3
On Wed, Apr 17, 2024 at 10:07:23AM -0700, Stephen Hemminger wrote:
> The temporary array latencystats is not needed if the algorithm
> is converted into one pass.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---

If this series is merged first, my series here
https://patchwork.dpdk.org/project/dpdk/patch/1713397319-26135-6-git-send-email-roretzla@linux.microsoft.com/
can just have the latencystats patch dropped, since this series carries
the preferred change.

Thanks, Stephen!

Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
  

Patch

diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c
index 4ea9b0d75b..9b345bfb33 100644
--- a/lib/latencystats/rte_latencystats.c
+++ b/lib/latencystats/rte_latencystats.c
@@ -157,9 +157,9 @@  calc_latency(uint16_t pid __rte_unused,
 		uint16_t nb_pkts,
 		void *_ __rte_unused)
 {
-	unsigned int i, cnt = 0;
+	unsigned int i;
 	uint64_t now;
-	float latency[nb_pkts];
+	float latency;
 	static float prev_latency;
 	/*
 	 * Alpha represents degree of weighting decrease in EWMA,
@@ -169,13 +169,14 @@  calc_latency(uint16_t pid __rte_unused,
 	const float alpha = 0.2;
 
 	now = rte_rdtsc();
-	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & timestamp_dynflag)
-			latency[cnt++] = now - *timestamp_dynfield(pkts[i]);
-	}
 
 	rte_spinlock_lock(&glob_stats->lock);
-	for (i = 0; i < cnt; i++) {
+	for (i = 0; i < nb_pkts; i++) {
+		if (!(pkts[i]->ol_flags & timestamp_dynflag))
+			continue;
+
+		latency = now - *timestamp_dynfield(pkts[i]);
+
 		/*
 		 * The jitter is calculated as statistical mean of interpacket
 		 * delay variation. The "jitter estimate" is computed by taking
@@ -187,22 +188,22 @@  calc_latency(uint16_t pid __rte_unused,
 		 * Reference: Calculated as per RFC 5481, sec 4.1,
 		 * RFC 3393 sec 4.5, RFC 1889 sec.
 		 */
-		glob_stats->jitter +=  (fabsf(prev_latency - latency[i])
+		glob_stats->jitter +=  (fabsf(prev_latency - latency)
 					- glob_stats->jitter)/16;
 		if (glob_stats->min_latency == 0)
-			glob_stats->min_latency = latency[i];
-		else if (latency[i] < glob_stats->min_latency)
-			glob_stats->min_latency = latency[i];
-		else if (latency[i] > glob_stats->max_latency)
-			glob_stats->max_latency = latency[i];
+			glob_stats->min_latency = latency;
+		else if (latency < glob_stats->min_latency)
+			glob_stats->min_latency = latency;
+		else if (latency > glob_stats->max_latency)
+			glob_stats->max_latency = latency;
 		/*
 		 * The average latency is measured using exponential moving
 		 * average, i.e. using EWMA
 		 * https://en.wikipedia.org/wiki/Moving_average
 		 */
 		glob_stats->avg_latency +=
-			alpha * (latency[i] - glob_stats->avg_latency);
-		prev_latency = latency[i];
+			alpha * (latency - glob_stats->avg_latency);
+		prev_latency = latency;
 	}
 	rte_spinlock_unlock(&glob_stats->lock);