net/iavf: fix frequent command interaction leads to high cpu

Message ID 20210911040221.3681-1-chenqiming_huawei@163.com (mailing list archive)
State Accepted, archived
Delegated to: Qi Zhang
Series net/iavf: fix frequent command interaction leads to high cpu

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/github-robot: build success github build: passed
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing fail Testing issues
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS

Commit Message

Qiming Chen Sept. 11, 2021, 4:02 a.m. UTC
  A test scenario that continuously reads port statistics makes CPU usage
soar, which is not acceptable. Analysis shows that VF-to-PF command
interaction is handled by the iavf_execute_vf_cmd function: after a
message is sent, the caller waits for the interrupt thread to fetch the
response data from the PF. This wait is implemented with rte_delay_ms,
which busy-waits and never releases the CPU, so polling statistics keeps
a core fully occupied. That busy-wait is the root cause of the high CPU
usage.
Command interaction belongs to the control plane and has no strict
performance requirement, so wait with iavf_msec_delay instead, which
sleeps and does not consume CPU time.

Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Cc: stable@dpdk.org

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
 drivers/net/iavf/iavf_vchnl.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
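
For context, a minimal sketch of why the one-line substitution matters. The expansion of iavf_msec_delay() described in the comments below is an assumption based on the iavf osdep layer (iavf_osdep.h) in contemporary DPDK trees, not something stated in this patch; the point is only that it ends up in a sleeping delay rather than a spinning one.

#include <rte_cycles.h>   /* rte_delay_ms(), rte_delay_us_sleep() */

/*
 * rte_delay_ms() maps to rte_delay_us(), whose default backend
 * (rte_delay_us_block) spins on the TSC, so the calling lcore sits at
 * 100% CPU for the whole wait.
 */
void wait_for_pf_busy(void)
{
	rte_delay_ms(1);              /* burns a full millisecond of CPU */
}

/*
 * iavf_msec_delay() is assumed here to resolve to rte_delay_us_sleep(),
 * which calls nanosleep() and hands the core back to the OS scheduler
 * while the driver waits for the PF response.
 */
void wait_for_pf_sleep(void)
{
	rte_delay_us_sleep(1 * 1000); /* sleeps; virtually no CPU consumed */
}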
  

Comments

Qi Zhang Sept. 24, 2021, 5:42 a.m. UTC | #1
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Qiming Chen
> Sent: Saturday, September 11, 2021 12:02 PM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Qiming Chen <chenqiming_huawei@163.com>; stable@dpdk.org
> Subject: [dpdk-dev] [PATCH] net/iavf: fix frequent command interaction leads
> to high cpu

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi

Patch

diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 06dc663947..2f39c2077c 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -181,7 +181,7 @@  iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
 						   args->out_buffer);
 			if (result == IAVF_MSG_CMD)
 				break;
-			rte_delay_ms(ASQ_DELAY_MS);
+			iavf_msec_delay(ASQ_DELAY_MS);
 		} while (i++ < MAX_TRY_TIMES);
 		if (i >= MAX_TRY_TIMES ||
 		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
@@ -207,7 +207,7 @@  iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
 				err = -1;
 				break;
 			}
-			rte_delay_ms(ASQ_DELAY_MS);
+			iavf_msec_delay(ASQ_DELAY_MS);
 			/* If don't read msg or read sys event, continue */
 		} while (i++ < MAX_TRY_TIMES);
 		if (i >= MAX_TRY_TIMES ||
@@ -225,7 +225,7 @@  iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
 		do {
 			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
 				break;
-			rte_delay_ms(ASQ_DELAY_MS);
+			iavf_msec_delay(ASQ_DELAY_MS);
 			/* If don't read msg or read sys event, continue */
 		} while (i++ < MAX_TRY_TIMES);
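
To make the difference concrete outside of DPDK, here is a small standalone demonstration (plain C, no DPDK dependency) of the CPU-time gap between a spinning delay and a nanosleep-based one. It is illustrative only and is not part of the patch; the printed numbers are what one would expect on a typical Linux host, not measurements from the driver.

#include <stdio.h>
#include <time.h>

/* Spin until 'ms' milliseconds have elapsed, the way a blocking delay does. */
static void busy_wait_ms(long ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000L +
		 (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

/* Sleep for 'ms' milliseconds, the way a nanosleep-based delay does. */
static void sleep_wait_ms(long ms)
{
	struct timespec ts = { .tv_sec = ms / 1000,
			       .tv_nsec = (ms % 1000) * 1000000L };

	nanosleep(&ts, NULL);
}

int main(void)
{
	clock_t c0 = clock();

	for (int i = 0; i < 100; i++)
		busy_wait_ms(1);
	printf("busy-wait: ~%.0f ms of CPU time\n",
	       1000.0 * (clock() - c0) / CLOCKS_PER_SEC);

	c0 = clock();
	for (int i = 0; i < 100; i++)
		sleep_wait_ms(1);
	printf("sleep    : ~%.0f ms of CPU time\n",
	       1000.0 * (clock() - c0) / CLOCKS_PER_SEC);
	return 0;
}

The busy-wait loop consumes roughly as much CPU time as wall-clock time, while the sleeping loop consumes almost none; that is exactly the behaviour change the patch relies on for the statistics-polling case.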