From patchwork Thu Apr 13 09:45:00 2023
X-Patchwork-Submitter: Wenjing Qiao
X-Patchwork-Id: 126018
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wenjing Qiao
To: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, Wenjing Qiao, NorbertX Ciosek
Subject: [PATCH 16/18] common/idpf: add func to clean all DESCs on controlq
Date: Thu, 13 Apr 2023 05:45:00 -0400
Message-Id: <20230413094502.1714755-17-wenjing.qiao@intel.com>
In-Reply-To: <20230413094502.1714755-1-wenjing.qiao@intel.com>
References: <20230413094502.1714755-1-wenjing.qiao@intel.com>

Add 'idpf_ctlq_clean_sq_force', which cleans all descriptors on a given
control queue. It is needed when the control plane is not running and
the driver must perform proper cleanup.
Signed-off-by: NorbertX Ciosek
Signed-off-by: Wenjing Qiao
---
 drivers/common/idpf/base/idpf_controlq.c     | 56 ++++++++++++++++++--
 drivers/common/idpf/base/idpf_controlq_api.h |  4 ++
 2 files changed, 55 insertions(+), 5 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index 8381e4000f..9374fce71e 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -386,13 +386,15 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 }
 
 /**
- * idpf_ctlq_clean_sq - reclaim send descriptors on HW write back for the
- * requested queue
+ * __idpf_ctlq_clean_sq - helper function to reclaim descriptors on HW write
+ * back for the requested queue
  * @cq: pointer to the specific Control queue
  * @clean_count: (input|output) number of descriptors to clean as input, and
  * number of descriptors actually cleaned as output
  * @msg_status: (output) pointer to msg pointer array to be populated; needs
  * to be allocated by caller
+ * @force: (input) clean descriptors which were not done yet. Use with caution
+ * in kernel mode only
  *
  * Returns an array of message pointers associated with the cleaned
  * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
@@ -400,8 +402,8 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  * to send will have a non-zero status. The caller is expected to free original
  * ctlq_msgs and free or reuse the DMA buffers.
  */
-int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
-		       struct idpf_ctlq_msg *msg_status[])
+static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+				struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
 	u16 i = 0, num_to_clean;
@@ -425,7 +427,7 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	for (i = 0; i < num_to_clean; i++) {
 		/* Fetch next descriptor and check if marked as done */
 		desc = IDPF_CTLQ_DESC(cq, ntc);
-		if (!(LE16_TO_CPU(desc->flags) & IDPF_CTLQ_FLAG_DD))
+		if (!force && !(LE16_TO_CPU(desc->flags) & IDPF_CTLQ_FLAG_DD))
 			break;
 
 		desc_err = LE16_TO_CPU(desc->ret_val);
@@ -435,6 +437,8 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		}
 
 		msg_status[i] = cq->bi.tx_msg[ntc];
+		if (!msg_status[i])
+			break;
 		msg_status[i]->status = desc_err;
 
 		cq->bi.tx_msg[ntc] = NULL;
@@ -457,6 +461,48 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	return ret;
 }
 
+/**
+ * idpf_ctlq_clean_sq_force - reclaim all descriptors on HW write back for the
+ * requested queue. Use only in kernel mode.
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
+			     struct idpf_ctlq_msg *msg_status[])
+{
+	return __idpf_ctlq_clean_sq(cq, clean_count, msg_status, true);
+}
+
+/**
+ * idpf_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+		       struct idpf_ctlq_msg *msg_status[])
+{
+	return __idpf_ctlq_clean_sq(cq, clean_count, msg_status, false);
+}
+
 /**
  * idpf_ctlq_post_rx_buffs - post buffers to descriptor ring
  * @hw: pointer to hw struct
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 80be282b42..a00faac05f 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -191,6 +191,10 @@ int idpf_ctlq_send(struct idpf_hw *hw,
 int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		   struct idpf_ctlq_msg *q_msg);
 
+/* Reclaims all descriptors on HW write back */
+int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
+			     struct idpf_ctlq_msg *msg_status[]);
+
 /* Reclaims send descriptors on HW write back */
 int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		       struct idpf_ctlq_msg *msg_status[]);
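
For reviewers, a minimal sketch (not part of the patch) of how a teardown
path could drain a control queue with the new API when the control plane is
gone. Only idpf_ctlq_clean_sq_force and the idpf_ctlq_info/idpf_ctlq_msg
types come from this series; EXAMPLE_BATCH, the function name, and the loop
structure are illustrative assumptions, and the driver's osdep u16 type is
assumed to be in scope.

```c
#define EXAMPLE_BATCH 64	/* hypothetical batch size, not from this patch */

/* Drain all outstanding send descriptors during driver teardown.
 * Force-cleaning ignores the DD bit, so descriptors the (dead)
 * control plane never marked done are reclaimed as well.
 */
static void example_drain_ctlq(struct idpf_ctlq_info *cq)
{
	struct idpf_ctlq_msg *msg_status[EXAMPLE_BATCH];
	u16 clean_count;
	u16 i;

	do {
		clean_count = EXAMPLE_BATCH;
		if (idpf_ctlq_clean_sq_force(cq, &clean_count, msg_status))
			break;

		for (i = 0; i < clean_count; i++) {
			/* msg_status[i]->status carries the descriptor's
			 * ret_val; the caller owns the original ctlq_msg
			 * and its DMA buffer and must free or reuse them.
			 */
		}
	} while (clean_count == EXAMPLE_BATCH);
}
```

A full batch returned means more descriptors may remain, so the sketch loops
until a short (or zero) batch comes back; per the new NULL tx_msg check in
__idpf_ctlq_clean_sq, a short batch also occurs when an unused slot is hit.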