From patchwork Thu Oct 8 09:17:29 2020
X-Patchwork-Submitter: "Loftus, Ciara" <ciara.loftus@intel.com>
X-Patchwork-Id: 80000
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ciara Loftus <ciara.loftus@intel.com>
To: dev@dpdk.org
Cc: Ciara Loftus <ciara.loftus@intel.com>
Date: Thu, 8 Oct 2020 09:17:29 +0000
Message-Id: <20201008091729.4321-1-ciara.loftus@intel.com>
Subject: [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks
 with same netdev, qid

Sharing a umem between two xsks that use the same netdev and qid is not
supported. Supporting it would require locks, which would impact the
performance of the more typical cases: xsks with different qids and
netdevs.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 44 +++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index eaf2c9c873..9e0e5c254a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -634,16 +634,35 @@ find_internal_resource(struct pmd_internals *port_int)
 	return list;
 }
 
+/* Check if the netdev,qid context already exists */
+static inline bool
+ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
+		struct pkt_rx_queue *list_rxq, const char *list_ifname)
+{
+	bool exists = false;
+
+	if (rxq->xsk_queue_idx == list_rxq->xsk_queue_idx &&
+			!strncmp(ifname, list_ifname, IFNAMSIZ)) {
+		AF_XDP_LOG(ERR, "ctx %s,%i already exists, cannot share umem\n",
+					ifname, rxq->xsk_queue_idx);
+		exists = true;
+	}
+
+	return exists;
+}
+
 /* Get a pointer to an existing UMEM which overlays the rxq's mb_pool */
-static inline struct xsk_umem_info *
-get_shared_umem(struct pkt_rx_queue *rxq) {
+static inline int
+get_shared_umem(struct pkt_rx_queue *rxq, const char *ifname,
+		struct xsk_umem_info **umem)
+{
 	struct internal_list *list;
 	struct pmd_internals *internals;
-	int i = 0;
+	int i = 0, ret = 0;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
 	if (mb_pool == NULL)
-		return NULL;
+		return ret;
 
 	pthread_mutex_lock(&internal_list_lock);
 
@@ -655,20 +674,25 @@ get_shared_umem(struct pkt_rx_queue *rxq) {
 			if (rxq == list_rxq)
 				continue;
 			if (mb_pool == internals->rx_queues[i].mb_pool) {
+				if (ctx_exists(rxq, ifname, list_rxq,
+						internals->if_name)) {
+					ret = -1;
+					goto out;
+				}
 				if (__atomic_load_n(
 					&internals->rx_queues[i].umem->refcnt,
 						__ATOMIC_ACQUIRE)) {
-					pthread_mutex_unlock(
-							&internal_list_lock);
-					return internals->rx_queues[i].umem;
+					*umem = internals->rx_queues[i].umem;
+					goto out;
 				}
 			}
 		}
 	}
 
+out:
	pthread_mutex_unlock(&internal_list_lock);
 
-	return NULL;
+	return ret;
 }
 
 static int
@@ -913,7 +937,9 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	uint64_t umem_size, align = 0;
 
 	if (internals->shared_umem) {
-		umem = get_shared_umem(rxq);
+		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
+			return NULL;
+
 		if (umem != NULL &&
 			__atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) <
 					umem->max_xsks) {
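
--
Not part of the patch: a minimal sketch of how an application might exercise
the shared-UMEM path after this change. It assumes the iface/start_queue/
shared_umem devargs of the shared UMEM feature and that the two af_xdp vdevs
probe as ports 0 and 1; the pool name, ring sizes and descriptor counts are
arbitrary.

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int
main(int argc, char **argv)
{
	struct rte_eth_conf port_conf = {0};
	struct rte_mempool *pool;

	/* The two vdevs are supplied on the EAL command line, e.g.:
	 *   --vdev net_af_xdp0,iface=eth0,start_queue=0,shared_umem=1
	 *   --vdev net_af_xdp1,iface=eth0,start_queue=1,shared_umem=1
	 */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* One mempool for both ports: the PMD overlays a single UMEM on
	 * the pool's memory, so rx queues that share the pool share the
	 * UMEM (the mb_pool match in get_shared_umem() above). */
	pool = rte_pktmbuf_pool_create("shared_pool", 4096, 64, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (pool == NULL)
		return -1;

	if (rte_eth_dev_configure(0, 1, 1, &port_conf) < 0 ||
			rte_eth_dev_configure(1, 1, 1, &port_conf) < 0)
		return -1;

	/* Port 0's xsk binds to ctx (eth0, qid 0). */
	if (rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), NULL,
			pool) < 0)
		return -1;

	/* Port 1's xsk binds to ctx (eth0, qid 1) thanks to start_queue=1:
	 * same UMEM, different ctx, so sharing is permitted. Had both
	 * vdevs used start_queue=0, ctx_exists() would match (eth0, 0)
	 * and this call would fail instead of sharing the umem unsafely. */
	if (rte_eth_rx_queue_setup(1, 0, 512, rte_socket_id(), NULL,
			pool) < 0)
		return -1;

	return 0;
}

In the rejected configuration the failure surfaces here as an
rte_eth_rx_queue_setup() error, since get_shared_umem() returns -1 and
xdp_umem_configure() propagates it as a NULL umem.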