From patchwork Fri Feb 18 11:20:37 2022
X-Patchwork-Submitter: "Loftus, Ciara"
X-Patchwork-Id: 107801
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ciara Loftus
To: dev@dpdk.org
Cc: Ciara Loftus
Subject: [PATCH 2/2] net/af_xdp: reserve fill queue before socket create
Date: Fri, 18 Feb 2022 11:20:37 +0000
Message-Id: <20220218112037.61204-2-ciara.loftus@intel.com>
In-Reply-To: <20220218112037.61204-1-ciara.loftus@intel.com>
References: <20220218112037.61204-1-ciara.loftus@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Some zero copy AF_XDP drivers, e.g. ice, require that addresses are
already present in the fill queue before the socket is created.
Otherwise, log messages such as the following may be seen:

XSK buffer pool does not provide enough addresses to fill 2047 buffers
on Rx ring 0

This commit ensures that the addresses are available before creating
the socket, instead of after.

Signed-off-by: Ciara Loftus
Tested-by: Ferruh Yigit
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 5f493951f6..309b96c9b4 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1284,6 +1284,20 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		return -ENOMEM;
 	txq->umem = rxq->umem;
 
+#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
+	ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
+	if (ret) {
+		AF_XDP_LOG(DEBUG, "Failed to get enough buffers for fq.\n");
+		goto out_umem;
+	}
+#endif
+
+	ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
+	if (ret) {
+		AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
+		goto out_umem;
+	}
+
 	cfg.rx_size = ring_size;
 	cfg.tx_size = ring_size;
 	cfg.libbpf_flags = 0;
@@ -1335,14 +1349,6 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		}
 	}
 
-#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
-	ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
-	if (ret) {
-		AF_XDP_LOG(DEBUG, "Failed to get enough buffers for fq.\n");
-		goto out_xsk;
-	}
-#endif
-
 	if (rxq->busy_budget) {
 		ret = configure_preferred_busy_poll(rxq);
 		if (ret) {
@@ -1351,12 +1357,6 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		}
 	}
 
-	ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
-	if (ret) {
-		AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
-		goto out_xsk;
-	}
-
 	return 0;
 
 out_xsk: