From patchwork Thu Feb 23 03:16:46 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei" <beilei.xing@intel.com>
X-Patchwork-Id: 124436
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, Beilei Xing <beilei.xing@intel.com>, stable@dpdk.org
Subject: [PATCH] common/idpf: fix Rx queue configuration
Date: Thu, 23 Feb 2023 03:16:46 +0000
Message-Id: <20230223031646.11222-1-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
List-Id: DPDK patches and discussions

From: Beilei Xing <beilei.xing@intel.com>

The IDPF PMD enables two buffer queues by default. According to the
data sheet, if there is a second buffer queue, it is valid only when
bufq2_ena is set, so set that flag when configuring the Rx queue.

Fixes: c2494d783d31 ("net/idpf: support queue start")
Fixes: 8b95ced47a13 ("common/idpf: add Rx/Tx queue structs")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/common/idpf/idpf_common_virtchnl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 99d9efbb7c..9ee7259539 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -987,6 +987,7 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 		rxq_info->ring_len = rxq->nb_rx_desc;
 
 		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->bufq2_ena = 1;
 		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
 		rxq_info->rx_buffer_low_watermark = 64;
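
For context, the hunk above is in idpf_vc_rxq_config(), where the PMD fills
the virtchnl2 Rx queue descriptor used by the split-queue model. A minimal
sketch of the idea behind the fix is below; the field names come from the
hunk itself, while the NULL guard and surrounding flow are illustrative
assumptions rather than the actual driver code:

	/* Illustrative sketch only: program both buffer queues and, per the
	 * data sheet, set bufq2_ena so the device treats rx_bufq2_id as
	 * valid. Without bufq2_ena, the second buffer queue is ignored. */
	rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
	if (rxq->bufq2 != NULL) {
		rxq_info->bufq2_ena = 1;
		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
	}
	rxq_info->rx_buffer_low_watermark = 64;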