From patchwork Tue Mar 12 09:22:32 2019
X-Patchwork-Submitter: Matt
X-Patchwork-Id: 51109
X-Patchwork-Delegate: thomas@monjalon.net
From: Yangchao Zhou
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com
Date: Tue, 12 Mar 2019 17:22:32 +0800
Message-Id: <20190312092232.93640-1-zhouyates@gmail.com>
In-Reply-To: <20190228073010.49716-1-zhouyates@gmail.com>
References: <20190228073010.49716-1-zhouyates@gmail.com>
Subject: [dpdk-dev] [PATCH v2] kni: fix possible kernel crash with va2pa
List-Id: DPDK patches and discussions

va2pa depends on the offset between the physical address and the
virtual address of the current mbuf. It can therefore compute a wrong
physical address for the next mbuf when that mbuf was allocated in a
different hugepage segment.

In rte_mempool_populate_default(), the attempt to allocate one whole
block of contiguous memory may fail; the pool is then reserved across
several memzones whose physical-to-virtual address offsets differ.
rte_mempool_populate_default() is used by rte_pktmbuf_pool_create().

Signed-off-by: Yangchao Zhou
---
v2: Add an explanation of what causes this problem.
    Use m->next to store the physical address.
---
 kernel/linux/kni/kni_net.c                        | 42 +++++++++++--------
 .../eal/include/exec-env/rte_kni_common.h         |  2 +-
 lib/librte_kni/rte_kni.c                          | 15 ++++++-
 3 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index 7371b6d58..106b5153f 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -61,18 +61,6 @@ kva2data_kva(struct rte_kni_mbuf *m)
 	return phys_to_virt(m->buf_physaddr + m->data_off);
 }
 
-/* virtual address to physical address */
-static void *
-va2pa(void *va, struct rte_kni_mbuf *m)
-{
-	void *pa;
-
-	pa = (void *)((unsigned long)va -
-			((unsigned long)m->buf_addr -
-			 (unsigned long)m->buf_physaddr));
-	return pa;
-}
-
 /*
  * It can be called to process the request.
  */
@@ -173,7 +161,10 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
 	struct rte_kni_fifo *src_pa, struct rte_kni_fifo *dst_va)
 {
 	uint32_t ret, i, num_dst, num_rx;
-	void *kva;
+	struct rte_kni_mbuf *kva, *_kva;
+	int nb_segs;
+	int kva_nb_segs;
+
 	do {
 		num_dst = kni_fifo_free_count(dst_va);
 		if (num_dst == 0)
@@ -188,6 +179,17 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
 		for (i = 0; i < num_rx; i++) {
 			kva = pa2kva(kni->pa[i]);
 			kni->va[i] = pa2va(kni->pa[i], kva);
+
+			kva_nb_segs = kva->nb_segs;
+			for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
+				if (!kva->next)
+					break;
+
+				_kva = kva;
+				kva = pa2kva(kva->next);
+				/* Convert physical address to virtual address */
+				_kva->next = pa2va(_kva->next, kva);
+			}
 		}
 
 		ret = kni_fifo_put(dst_va, kni->va, num_rx);
@@ -313,7 +315,7 @@ kni_net_rx_normal(struct kni_dev *kni)
 	uint32_t ret;
 	uint32_t len;
 	uint32_t i, num_rx, num_fq;
-	struct rte_kni_mbuf *kva;
+	struct rte_kni_mbuf *kva, *_kva;
 	void *data_kva;
 	struct sk_buff *skb;
 	struct net_device *dev = kni->net_dev;
@@ -363,8 +365,11 @@ kni_net_rx_normal(struct kni_dev *kni)
 			if (!kva->next)
 				break;
 
-			kva = pa2kva(va2pa(kva->next, kva));
+			_kva = kva;
+			kva = pa2kva(kva->next);
 			data_kva = kva2data_kva(kva);
+			/* Convert physical address to virtual address */
+			_kva->next = pa2va(_kva->next, kva);
 		}
 	}
 
@@ -481,7 +486,7 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 	uint32_t ret;
 	uint32_t len;
 	uint32_t i, num_rq, num_fq, num;
-	struct rte_kni_mbuf *kva;
+	struct rte_kni_mbuf *kva, *_kva;
 	void *data_kva;
 	struct sk_buff *skb;
 	struct net_device *dev = kni->net_dev;
@@ -545,8 +550,11 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 			if (!kva->next)
 				break;
 
-			kva = pa2kva(va2pa(kva->next, kva));
+			_kva = kva;
+			kva = pa2kva(kva->next);
 			data_kva = kva2data_kva(kva);
+			/* Convert physical address to virtual address */
+			_kva->next = pa2va(_kva->next, kva);
 		}
 	}
 
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 5afa08713..688db9758 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -86,7 +86,7 @@ struct rte_kni_mbuf {
 	/* fields on second cache line */
 	char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE)));
 	void *pool;
-	void *next;
+	void *next;	/**< Physical address of next mbuf in kernel. */
 };
 
 /*
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 73aeccccf..74b1ff5b6 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -353,6 +353,19 @@ va2pa(struct rte_mbuf *m)
 			 (unsigned long)m->buf_iova));
 }
 
+static void *
+va2pa_all(struct rte_mbuf *mbuf)
+{
+	void *phy_mbuf = va2pa(mbuf);
+	struct rte_mbuf *next = mbuf->next;
+	while (next) {
+		mbuf->next = va2pa(next);
+		mbuf = next;
+		next = mbuf->next;
+	}
+	return phy_mbuf;
+}
+
 static void
 obj_free(struct rte_mempool *mp __rte_unused, void *opaque, void *obj,
 		unsigned obj_idx __rte_unused)
@@ -550,7 +563,7 @@ rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
 	unsigned int i;
 
 	for (i = 0; i < num; i++)
-		phy_mbufs[i] = va2pa(mbufs[i]);
+		phy_mbufs[i] = va2pa_all(mbufs[i]);
 
 	ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);