From patchwork Fri Feb 18 11:20:36 2022
X-Patchwork-Submitter: "Loftus, Ciara" <ciara.loftus@intel.com>
X-Patchwork-Id: 107800
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ciara Loftus <ciara.loftus@intel.com>
To: dev@dpdk.org
Cc: Ciara Loftus <ciara.loftus@intel.com>, stable@dpdk.org
Subject: [PATCH 1/2] net/af_xdp: ensure xsk is deleted on Rx queue setup error
Date: Fri, 18 Feb 2022 11:20:36 +0000
Message-Id: <20220218112037.61204-1-ciara.loftus@intel.com>
X-Mailer: git-send-email 2.25.1

The Rx queue setup can fail for many reasons, e.g. failure to set up the
custom XDP program, failure to allocate or reserve fill queue buffers, or
failure to configure busy polling. When such a failure occurs and the xsk
is already set up, the xsk should be deleted before returning. This commit
ensures that happens.
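
(For illustration only, not part of the patch: a minimal, self-contained
sketch of the ordered goto-label cleanup this change moves xsk_configure()
to. The names setup_queue and fail_step are hypothetical, and plain heap
allocations stand in for the umem and the xsk socket; the point is that once
the socket exists, every later failure path unwinds it before releasing the
umem.)

#include <stdio.h>
#include <stdlib.h>

struct rx_queue {
	void *umem;	/* stand-in for the shared umem */
	void *xsk;	/* stand-in for the AF_XDP socket */
};

/* fail_step lets the caller simulate a failure at a given setup step. */
static int setup_queue(struct rx_queue *q, int fail_step)
{
	/* step 1: acquire the umem (stand-in: a heap allocation) */
	q->umem = malloc(64);
	if (q->umem == NULL)
		return -1;

	/* step 2: create the socket; on failure only the umem needs undoing */
	q->xsk = (fail_step == 2) ? NULL : malloc(64);
	if (q->xsk == NULL)
		goto out_umem;

	/*
	 * step 3: later setup (map insert, fill queue buffers, busy polling);
	 * the socket exists now, so a failure here must delete it too.
	 */
	if (fail_step == 3)
		goto out_xsk;

	return 0;

out_xsk:
	free(q->xsk);		/* undo step 2 first */
	q->xsk = NULL;
out_umem:
	free(q->umem);		/* then undo step 1 */
	q->umem = NULL;
	return -1;
}

int main(void)
{
	struct rx_queue q = {0};

	/* a late failure leaves nothing behind: both pointers come back NULL */
	printf("fail at step 3 -> %d (xsk=%p umem=%p)\n",
	       setup_queue(&q, 3), q.xsk, q.umem);

	if (setup_queue(&q, 0) == 0) {
		printf("setup ok\n");
		free(q.xsk);
		free(q.umem);
	}
	return 0;
}
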
Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Fixes: 288a85aef192 ("net/af_xdp: enable custom XDP program loading")
Fixes: 055a393626ed ("net/af_xdp: prefer busy polling")
Fixes: 01fa83c94d7e ("net/af_xdp: workaround custom program loading")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 6ac710c6bd..5f493951f6 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1302,7 +1302,7 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		if (ret) {
 			AF_XDP_LOG(ERR, "Failed to load custom XDP program %s\n",
 					internals->prog_path);
-			goto err;
+			goto out_umem;
 		}
 		internals->custom_prog_configured = 1;
 		cfg.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
@@ -1319,7 +1319,7 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 
 	if (ret) {
 		AF_XDP_LOG(ERR, "Failed to create xsk socket.\n");
-		goto err;
+		goto out_umem;
 	}
 
 	/* insert the xsk into the xsks_map */
@@ -1331,7 +1331,7 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 					&rxq->xsk_queue_idx, &fd, 0);
 		if (err) {
 			AF_XDP_LOG(ERR, "Failed to insert xsk in map.\n");
-			goto err;
+			goto out_xsk;
 		}
 	}
 
@@ -1339,7 +1339,7 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 	ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
 	if (ret) {
 		AF_XDP_LOG(DEBUG, "Failed to get enough buffers for fq.\n");
-		goto err;
+		goto out_xsk;
 	}
 #endif
 
@@ -1347,20 +1347,21 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 		ret = configure_preferred_busy_poll(rxq);
 		if (ret) {
 			AF_XDP_LOG(ERR, "Failed configure busy polling.\n");
-			goto err;
+			goto out_xsk;
 		}
 	}
 
 	ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
 	if (ret) {
-		xsk_socket__delete(rxq->xsk);
 		AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
-		goto err;
+		goto out_xsk;
 	}
 
 	return 0;
 
-err:
+out_xsk:
+	xsk_socket__delete(rxq->xsk);
+out_umem:
 	if (__atomic_sub_fetch(&rxq->umem->refcnt, 1, __ATOMIC_ACQUIRE) == 0)
 		xdp_umem_destroy(rxq->umem);