From patchwork Tue Aug 1 05:44:57 2023
X-Patchwork-Submitter: Ganapati Kundapura
X-Patchwork-Id: 129768
X-Patchwork-Delegate: jerinj@marvell.com
From: Ganapati Kundapura
To: jerinj@marvell.com, jay.jayatheerthan@intel.com, s.v.naga.harish.k@intel.com,
 abhinandan.gujjar@intel.com, dev@dpdk.org
Subject: [PATCH v1] eventdev/crypto: flush ops when circ buffer is full
Date: Tue, 1 Aug 2023 00:44:57 -0500
Message-Id: <20230801054457.1184208-1-ganapati.kundapura@intel.com>

Crypto ops in the circular buffer are not flushed to the crypto device
when the device becomes busy and the circular buffer fills up. Flush the
ops from the circular buffer when it is full instead of returning without
flushing.

Signed-off-by: Ganapati Kundapura

diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index 52a28e5..1b435c9 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -248,9 +248,18 @@ eca_circular_buffer_flush_to_cdev(struct crypto_ops_circular_buffer *bufp,
 		n = *tailp - *headp;
 	else if (*tailp < *headp)
 		n = bufp->size - *headp;
-	else {
-		*nb_ops_flushed = 0;
-		return 0;  /* buffer empty */
+	else { /* head == tail case */
+		/* when head == tail,
+		 * circ buff is either full (tail pointer roll over) or empty
+		 */
+		if (bufp->count != 0) {
+			/* circ buffer is full */
+			n = bufp->count;
+		} else {
+			/* circ buffer is empty */
+			*nb_ops_flushed = 0;
+			return 0;  /* buffer empty */
+		}
 	}
 
 	*nb_ops_flushed = rte_cryptodev_enqueue_burst(cdev_id, qp_id,
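
For readers skimming the diff: with only head and tail indices, head == tail
is ambiguous, since it can mean either an empty buffer or a full one whose
tail has rolled over onto head. The adapter's circular buffer also tracks a
count, and the patch uses it to tell the two cases apart. Below is a minimal
standalone sketch of that bookkeeping in C; the struct and helper names are
illustrative stand-ins, not the adapter's actual crypto_ops_circular_buffer
layout or API, and the flushable-span helper shows the general pattern rather
than the exact expression used in eca_circular_buffer_flush_to_cdev().

#include <stdint.h>

/* Simplified stand-in for a head/tail/count circular buffer. */
struct circ_buf_sketch {
	uint16_t head;   /* next index to flush from */
	uint16_t tail;   /* next index to enqueue at */
	uint16_t count;  /* number of valid elements currently stored */
	uint16_t size;   /* total capacity */
};

/* head == tail alone cannot distinguish full from empty; count can. */
static inline int
circ_buf_is_empty(const struct circ_buf_sketch *bufp)
{
	return bufp->head == bufp->tail && bufp->count == 0;
}

static inline int
circ_buf_is_full(const struct circ_buf_sketch *bufp)
{
	return bufp->head == bufp->tail && bufp->count != 0;
}

/*
 * Contiguous run of elements starting at head that could be handed to a
 * single burst enqueue: tail ahead of head gives tail - head; a wrapped
 * (or full) buffer gives the span from head up to the end of the array;
 * an empty buffer gives 0.
 */
static inline uint16_t
circ_buf_contig_flushable(const struct circ_buf_sketch *bufp)
{
	if (bufp->tail > bufp->head)
		return (uint16_t)(bufp->tail - bufp->head);
	if (bufp->tail < bufp->head || circ_buf_is_full(bufp))
		return (uint16_t)(bufp->size - bufp->head);
	return 0; /* head == tail with count == 0: nothing to flush */
}

This scheme only works if count is incremented on every enqueue and
decremented for every op actually flushed, so that it stays authoritative
in precisely the head == tail case that the index comparison alone cannot
resolve.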