[v3,4/5] eventdev/crypto: fix overflow in circular buffer

Message ID 20221207064945.1665368-4-ganapati.kundapura@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Jerin Jacob
Series [v3,1/5] eventdev/event_crypto: process event port's impl rel cap

Checks

Context        Check     Description
ci/checkpatch  success   coding style OK

Commit Message

Ganapati Kundapura Dec. 7, 2022, 6:49 a.m. UTC
  On crypto enqueue failures, the backpressure flag is set to stop
further dequeues from the event device, but the current logic keeps
dequeuing up to max_nb events, which can overflow the adapter's
circular buffer.

Fix this by checking the backpressure flag just before dequeuing
events from the event device.

Fixes: 7901eac3409a ("eventdev: add crypto adapter implementation")

Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
---
v3:
* Updated commit message

v2:
* Updated subject line in commit message
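
The control-flow change above can be sketched as a toy model. All names
here (toy_adapter, toy_flush, toy_enq_run) are illustrative stand-ins,
not the DPDK API; the point is only that the backpressure check now runs
on every loop iteration, before each dequeue burst:

```c
#include <stdbool.h>

#define BATCH_SIZE 32

/* Hypothetical stand-in for the adapter state. */
struct toy_adapter {
	bool stop_enq_to_cryptodev; /* backpressure flag */
	unsigned int flushed;
	unsigned int dequeued;
};

/* Pretend flush: clears backpressure only if space was freed. */
static unsigned int
toy_flush(struct toy_adapter *a, bool space_freed)
{
	if (space_freed) {
		a->stop_enq_to_cryptodev = false;
		a->flushed++;
		return 1;
	}
	return 0;
}

/* Fixed enqueue-run loop: the backpressure check happens inside the
 * loop, before each dequeue burst, so no events are pulled from the
 * event device while the circular buffer is still full. */
static unsigned int
toy_enq_run(struct toy_adapter *a, unsigned int max_enq, bool space_freed)
{
	unsigned int nb_enq, n = BATCH_SIZE;

	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
		if (a->stop_enq_to_cryptodev) {
			toy_flush(a, space_freed);
			if (a->stop_enq_to_cryptodev)
				break; /* still backpressured: dequeue nothing */
		}
		a->dequeued += n; /* stands in for rte_event_dequeue_burst() */
	}
	return a->dequeued;
}
```

With the old code, the backpressure check ran only once before the loop,
so a flag raised (or left set) mid-run did not stop subsequent bursts.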
  

Comments

Gujjar, Abhinandan S Dec. 7, 2022, 7:04 a.m. UTC | #1
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>

> -----Original Message-----
> From: Kundapura, Ganapati <ganapati.kundapura@intel.com>
> Sent: Wednesday, December 7, 2022 12:20 PM
> To: dev@dpdk.org; jerinj@marvell.com; Naga Harish K, S V
> <s.v.naga.harish.k@intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>
> Cc: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; vfialko@marvell.com
> Subject: [PATCH v3 4/5] eventdev/crypto: fix overflow in circular buffer
  

Patch

diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index c08984b..31b8255 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -573,14 +573,15 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
 		return 0;
 
-	if (unlikely(adapter->stop_enq_to_cryptodev)) {
-		nb_enqueued += eca_crypto_enq_flush(adapter);
+	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 
-		if (unlikely(adapter->stop_enq_to_cryptodev))
-			goto skip_event_dequeue_burst;
-	}
+		if (unlikely(adapter->stop_enq_to_cryptodev)) {
+			nb_enqueued += eca_crypto_enq_flush(adapter);
+
+			if (unlikely(adapter->stop_enq_to_cryptodev))
+				break;
+		}
 
-	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id,
 					    event_port_id, ev, BATCH_SIZE, 0);
@@ -591,8 +592,6 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
 	}
 
-skip_event_dequeue_burst:
-
 	if ((++adapter->transmit_loop_count &
 		(CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
 		nb_enqueued += eca_crypto_enq_flush(adapter);
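
The periodic-flush condition retained at the end of the hunk relies on
CRYPTO_ENQ_FLUSH_THRESHOLD being a power of two: incrementing the loop
counter and masking with (threshold - 1) is a cheap modulo. A minimal
sketch, with an illustrative threshold value rather than the adapter's
actual constant:

```c
/* Power-of-two flush threshold; the value is illustrative, standing in
 * for CRYPTO_ENQ_FLUSH_THRESHOLD. */
#define FLUSH_THRESHOLD 1024

/* Returns nonzero on every FLUSH_THRESHOLD-th call. Because the
 * threshold is a power of two, (count & (FLUSH_THRESHOLD - 1)) == 0
 * is equivalent to (count % FLUSH_THRESHOLD) == 0 without a divide. */
static int
should_flush(unsigned int *transmit_loop_count)
{
	return (++*transmit_loop_count & (FLUSH_THRESHOLD - 1)) == 0;
}
```

This keeps the hot enqueue-run path divide-free while still draining
the crypto enqueue buffer at a bounded interval.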