From patchwork Wed Dec 7 06:49:44 2022
X-Patchwork-Submitter: Ganapati Kundapura <ganapati.kundapura@intel.com>
X-Patchwork-Id: 120520
X-Patchwork-Delegate: jerinj@marvell.com
From: Ganapati Kundapura <ganapati.kundapura@intel.com>
To: dev@dpdk.org, jerinj@marvell.com, s.v.naga.harish.k@intel.com,
	abhinandan.gujjar@intel.com
Cc: jay.jayatheerthan@intel.com, vfialko@marvell.com
Subject: [PATCH v3 4/5] eventdev/crypto: fix overflow in circular buffer
Date: Wed, 7 Dec 2022 00:49:44 -0600
Message-Id: <20221207064945.1665368-4-ganapati.kundapura@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20221207064945.1665368-1-ganapati.kundapura@intel.com>
References: <20221201064652.1885734-1-ganapati.kundapura@intel.com>
	<20221207064945.1665368-1-ganapati.kundapura@intel.com>

On crypto enqueue failures, even though the backpressure flag is set
to stop further dequeues from the event device, the current logic
keeps dequeueing events until max_nb events have been polled.

Fix this by checking the backpressure flag just before dequeueing
events from the event device.
Fixes: 7901eac3409a ("eventdev: add crypto adapter implementation")

Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
---
v3:
* Updated commit message

v2:
* Updated subject line in commit message

diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index c08984b..31b8255 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -573,14 +573,15 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
 		return 0;
 
-	if (unlikely(adapter->stop_enq_to_cryptodev)) {
-		nb_enqueued += eca_crypto_enq_flush(adapter);
+	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 
-		if (unlikely(adapter->stop_enq_to_cryptodev))
-			goto skip_event_dequeue_burst;
-	}
+		if (unlikely(adapter->stop_enq_to_cryptodev)) {
+			nb_enqueued += eca_crypto_enq_flush(adapter);
+
+			if (unlikely(adapter->stop_enq_to_cryptodev))
+				break;
+		}
 
-	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id, event_port_id,
 					ev, BATCH_SIZE, 0);
@@ -591,8 +592,6 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
 	}
 
-skip_event_dequeue_burst:
-
 	if ((++adapter->transmit_loop_count &
 	    (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
 		nb_enqueued += eca_crypto_enq_flush(adapter);
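
The essence of the change is that the backpressure check moves inside
the dequeue loop, so a flag raised by a failed crypto enqueue is
honoured on the next iteration instead of going unnoticed until max_enq
events have been pulled from the event device. Below is a minimal,
compilable sketch of the fixed control flow with the adapter plumbing
stubbed out; dequeue_burst(), enq_to_cryptodev(), enq_flush() and the
MAX_ENQ/BATCH_SIZE constants are simplified stand-ins, not the eventdev
API:

/*
 * Standalone sketch of the fixed enqueue-run loop. All names here are
 * placeholders; only the loop shape mirrors the patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ENQ 8
#define BATCH_SIZE 4

static bool stop_enq_to_cryptodev; /* backpressure flag */

static unsigned int dequeue_burst(void) { return BATCH_SIZE; }

static unsigned int enq_to_cryptodev(unsigned int n)
{
	/* Simulate a full circular buffer: raise backpressure. */
	stop_enq_to_cryptodev = true;
	return n;
}

static unsigned int enq_flush(void)
{
	/* Stub: a real flush clears the flag once the buffer drains. */
	return 0;
}

static unsigned int enq_run(void)
{
	unsigned int nb_enq, nb_enqueued = 0, n = 0;

	for (nb_enq = 0; nb_enq < MAX_ENQ; nb_enq += n) {
		/*
		 * Check backpressure before every dequeue; the old code
		 * checked only once, before entering the loop, so a flag
		 * raised mid-loop could not stop further dequeues.
		 */
		if (stop_enq_to_cryptodev) {
			nb_enqueued += enq_flush();
			if (stop_enq_to_cryptodev)
				break;
		}
		n = dequeue_burst();
		if (n == 0)
			break;
		nb_enqueued += enq_to_cryptodev(n);
	}
	return nb_enqueued;
}

int main(void)
{
	/* Backpressure after the first burst stops the second dequeue. */
	printf("enqueued %u events\n", enq_run());
	return 0;
}

The flag is re-tested after the flush because a successful flush of the
circular buffer clears the backpressure condition, in which case the
loop can safely continue dequeueing.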