From patchwork Wed Nov 30 15:42:58 2022
X-Patchwork-Submitter: Ganapati Kundapura
X-Patchwork-Id: 120356
X-Patchwork-Delegate: jerinj@marvell.com
From: Ganapati Kundapura
To:
dev@dpdk.org, jerinj@marvell.com, s.v.naga.harish.k@intel.com, abhinandan.gujjar@intel.com
Cc: jay.jayatheerthan@intel.com
Subject: [PATCH v1] eventdev/crypto: overflow in circular buffer
Date: Wed, 30 Nov 2022 09:42:58 -0600
Message-Id: <20221130154258.1694578-1-ganapati.kundapura@intel.com>
X-Mailer: git-send-email 2.23.0

The crypto adapter checks CPM backpressure only once in enq_run().
This can overflow the circular buffer if some ops failed to flush
to the cryptodev.

Check CPM backpressure on every iteration in enq_run().

Signed-off-by: Ganapati Kundapura

---
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index 3c585d7..e7e9455 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -585,14 +585,15 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
 		return 0;
 
-	if (unlikely(adapter->stop_enq_to_cryptodev)) {
-		nb_enqueued += eca_crypto_enq_flush(adapter);
+	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 
-		if (unlikely(adapter->stop_enq_to_cryptodev))
-			goto skip_event_dequeue_burst;
-	}
+		if (unlikely(adapter->stop_enq_to_cryptodev)) {
+			nb_enqueued += eca_crypto_enq_flush(adapter);
+
+			if (unlikely(adapter->stop_enq_to_cryptodev))
+				break;
+		}
 
-	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id, event_port_id,
 					ev, BATCH_SIZE, 0);
@@ -603,8 +604,6 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
 	}
 
-skip_event_dequeue_burst:
-
 	if ((++adapter->transmit_loop_count &
 	    (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
 		nb_enqueued += eca_crypto_enq_flush(adapter);