From patchwork Mon Jan 17 16:20:27 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 105915
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yvonnex.yang@intel.com, yuanx.wang@intel.com
Subject: [PATCH] vhost: fix guest physical address to host physical address mapping
Date: Mon, 17 Jan 2022 16:20:27 +0000
Message-Id: <20220117162027.927041-1-yuanx.wang@intel.com>

Async copy fails when looking up the hpa in the gpa-to-hpa mapping table.
This happens because the lookup matches the gpa exactly against entries in
the merged mapping table, but merging adjacent pages drops the per-page
entries, so a gpa that falls inside a merged entry no longer matches that
entry's start address. A new range comparison method is introduced to
solve this issue.
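To make the failure mode easier to see outside of the vhost code, below is a
minimal standalone sketch (illustrative only, not the DPDK structures or API):
it assumes a simplified guest_page layout with only guest_phys_addr,
host_phys_addr and size, plus hard-coded example addresses, and shows how an
exact-start comparator misses a gpa inside a merged entry while a range
comparator in the style of guest_page_rangecmp() finds it.

/*
 * Standalone sketch: exact-start vs. range lookup over a merged page table.
 * The struct and values are made up for illustration; only the comparator
 * logic mirrors the patch.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct guest_page {
	uint64_t guest_phys_addr;
	uint64_t host_phys_addr;
	uint64_t size;
};

/* Exact-match comparator: only hits when the key equals an entry's start. */
static int page_addrcmp(const void *p1, const void *p2)
{
	const struct guest_page *key = p1;
	const struct guest_page *page = p2;

	if (key->guest_phys_addr > page->guest_phys_addr)
		return 1;
	if (key->guest_phys_addr < page->guest_phys_addr)
		return -1;
	return 0;
}

/* Range comparator: hits when the key falls inside [start, start + size). */
static int page_rangecmp(const void *p1, const void *p2)
{
	const struct guest_page *key = p1;
	const struct guest_page *page = p2;

	if (key->guest_phys_addr >= page->guest_phys_addr) {
		if (key->guest_phys_addr < page->guest_phys_addr + page->size)
			return 0;	/* gpa is covered by this entry */
		return 1;		/* gpa is above this entry */
	}
	return -1;			/* gpa is below this entry */
}

int main(void)
{
	/* Two 4K pages merged into one 8K entry, plus one separate page. */
	struct guest_page pages[] = {
		{ .guest_phys_addr = 0x0000,  .host_phys_addr = 0x100000, .size = 0x2000 },
		{ .guest_phys_addr = 0x10000, .host_phys_addr = 0x200000, .size = 0x1000 },
	};
	size_t nr = sizeof(pages) / sizeof(pages[0]);
	/* gpa 0x1800 lives in the second half of the merged first entry. */
	struct guest_page key = { .guest_phys_addr = 0x1800 };
	struct guest_page *exact, *range;

	exact = bsearch(&key, pages, nr, sizeof(pages[0]), page_addrcmp);
	range = bsearch(&key, pages, nr, sizeof(pages[0]), page_rangecmp);

	printf("exact match: %s\n", exact ? "found" : "not found");
	if (range)
		printf("range match: hpa = 0x%" PRIx64 "\n",
			range->host_phys_addr +
			(key.guest_phys_addr - range->guest_phys_addr));
	return 0;
}

On a merged table the exact lookup returns NULL for 0x1800, while the range
lookup resolves it to the covering entry and its hpa, which is the behavior
the patch restores for the async copy path.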
Fixes: 6563cf92380 ("vhost: fix async copy on multi-page buffers")

Signed-off-by: Yuan Wang
Reviewed-by: Maxime Coquelin
---
 lib/vhost/vhost.h | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 9521ae56da..d4586f3341 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -588,6 +588,20 @@ static __rte_always_inline int guest_page_addrcmp(const void *p1,
 	return 0;
 }
 
+static __rte_always_inline int guest_page_rangecmp(const void *p1, const void *p2)
+{
+	const struct guest_page *page1 = (const struct guest_page *)p1;
+	const struct guest_page *page2 = (const struct guest_page *)p2;
+
+	if (page1->guest_phys_addr >= page2->guest_phys_addr) {
+		if (page1->guest_phys_addr < page2->guest_phys_addr + page2->size)
+			return 0;
+		else
+			return 1;
+	} else
+		return -1;
+}
+
 static __rte_always_inline rte_iova_t
 gpa_to_first_hpa(struct virtio_net *dev, uint64_t gpa,
 	uint64_t gpa_size, uint64_t *hpa_size)
@@ -598,9 +612,9 @@ gpa_to_first_hpa(struct virtio_net *dev, uint64_t gpa,
 	*hpa_size = gpa_size;
 	if (dev->nr_guest_pages >= VHOST_BINARY_SEARCH_THRESH) {
-		key.guest_phys_addr = gpa & ~(dev->guest_pages[0].size - 1);
+		key.guest_phys_addr = gpa;
 		page = bsearch(&key, dev->guest_pages, dev->nr_guest_pages,
-			sizeof(struct guest_page), guest_page_addrcmp);
+			sizeof(struct guest_page), guest_page_rangecmp);
 		if (page) {
 			if (gpa + gpa_size <= page->guest_phys_addr + page->size) {