From patchwork Tue Nov 5 11:04:15 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 62452
X-Patchwork-Delegate: thomas@monjalon.net
Date: Tue, 5 Nov 2019 16:34:15 +0530
Message-ID: <20191105110416.8955-2-vattunuru@marvell.com>
In-Reply-To: <20191105110416.8955-1-vattunuru@marvell.com>
References: <20191021080324.10659-1-vattunuru@marvell.com>
 <20191105110416.8955-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v12 1/2] kni: add IOVA=VA mode support

From: Vamsi Attunuru

The current KNI implementation only operates in IOVA_PA mode. This patch
adds the functionality required to enable KNI in IOVA_VA mode.

KNI can operate in IOVA_VA mode when this mode is requested with the EAL
option (`iova-mode=va`) or when the bus IOMMU scheme is selected as
RTE_IOVA_VA. During KNI creation, the application's iova_mode is passed to
the KNI kernel module, which accordingly translates PA/IOVA addresses to
KVA and vice versa.
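
For illustration only (not part of this patch), a minimal application-side
check of the mode EAL ended up with; rte_eal_iova_mode(), RTE_IOVA_VA and
the --iova-mode EAL option are existing DPDK APIs, while the application
name and log text here are made up:

    #include <stdio.h>
    #include <rte_eal.h>

    /* e.g. launched as:  ./app -l 0-3 --iova-mode=va ... */
    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* KNI interfaces created after this point inherit this mode. */
        if (rte_eal_iova_mode() == RTE_IOVA_VA)
            printf("KNI will translate IOVA (user VA) to KVA in the kernel\n");
        else
            printf("KNI will keep using physical addresses\n");

        return 0;
    }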
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
Suggested-by: Ferruh Yigit
---
 doc/guides/prog_guide/kernel_nic_interface.rst    |  9 +++++++
 doc/guides/rel_notes/release_19_11.rst            |  5 ++++
 lib/librte_eal/linux/eal/eal.c                    | 29 +++++++++++++----------
 lib/librte_eal/linux/eal/include/rte_kni_common.h |  1 +
 lib/librte_kni/rte_kni.c                          |  7 ++----
 5 files changed, 34 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 2fd58e1..f869493 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -300,6 +300,15 @@ The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
 The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
 It then puts the mbuf back in the cache.
 
+IOVA = VA: Support
+------------------
+
+KNI operates in IOVA_VA scheme when
+
+- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0) and
+- eal option `iova-mode=va` is passed or bus IOVA scheme in the DPDK is selected
+  as RTE_IOVA_VA.
+
 Ethtool
 -------
 
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index ae8e7b2..e5d2996 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -231,6 +231,11 @@ New Features
   * Added a console command to testpmd app, ``show port (port_id) ptypes``
     which gives ability to print port supported ptypes in different protocol layers.
 
+* **Added IOVA as VA support for KNI.**
+
+  Added IOVA = VA support for KNI, KNI can operate in IOVA = VA mode when
+  `iova-mode=va` eal option is passed to the application or when bus IOVA
+  scheme is selected as RTE_IOVA_VA.
 
 Removed Items
 -------------
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 9e2d50c..a1c5bf6 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -922,6 +922,19 @@ static int rte_eal_vfio_setup(void)
 }
 #endif
 
+static enum rte_iova_mode
+rte_eal_kni_get_iova_mode(enum rte_iova_mode iova_mode)
+{
+        if (iova_mode == RTE_IOVA_VA) {
+#if KERNEL_VERSION(4, 6, 0) > LINUX_VERSION_CODE
+                iova_mode = RTE_IOVA_PA;
+                RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module does not support VA\n");
+#endif
+        }
+
+        return iova_mode;
+}
+
 static void rte_eal_init_alert(const char *msg)
 {
         fprintf(stderr, "EAL: FATAL: %s\n", msg);
@@ -1085,24 +1098,16 @@ rte_eal_init(int argc, char **argv)
                         RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
                 }
         }
-#ifdef RTE_LIBRTE_KNI
-        /* Workaround for KNI which requires physical address to work */
-        if (iova_mode == RTE_IOVA_VA &&
-                        rte_eal_check_module("rte_kni") == 1) {
-                if (phys_addrs) {
-                        iova_mode = RTE_IOVA_PA;
-                        RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
-                } else {
-                        RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
-                }
-        }
-#endif
                 rte_eal_get_configuration()->iova_mode = iova_mode;
         } else {
                 rte_eal_get_configuration()->iova_mode =
                         internal_config.iova_mode;
         }
 
+        if (rte_eal_check_module("rte_kni") == 1)
+                rte_eal_get_configuration()->iova_mode =
+                        rte_eal_kni_get_iova_mode(rte_eal_iova_mode());
+
         if (rte_eal_iova_mode() == RTE_IOVA_PA && !phys_addrs) {
                 rte_eal_init_alert("Cannot use IOVA as 'PA' since physical addresses are not available");
                 rte_errno = EINVAL;
diff --git a/lib/librte_eal/linux/eal/include/rte_kni_common.h b/lib/librte_eal/linux/eal/include/rte_kni_common.h
index 46f75a7..2427a96 100644
--- a/lib/librte_eal/linux/eal/include/rte_kni_common.h
+++ b/lib/librte_eal/linux/eal/include/rte_kni_common.h
@@ -125,6 +125,7 @@ struct rte_kni_device_info {
         unsigned int min_mtu;
         unsigned int max_mtu;
         uint8_t mac_addr[6];
+        uint8_t iova_mode;
 };
 
 #define KNI_DEVICE "kni"
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 7fbcf22..7221280 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -97,11 +97,6 @@ static volatile int kni_fd = -1;
 int
 rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
 {
-        if (rte_eal_iova_mode() != RTE_IOVA_PA) {
-                RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
-                return -1;
-        }
-
         /* Check FD and open */
         if (kni_fd < 0) {
                 kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
@@ -302,6 +297,8 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
         kni->group_id = conf->group_id;
         kni->mbuf_size = conf->mbuf_size;
 
+        dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
+
         ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
         if (ret < 0)
                 goto ioctl_fail;
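
For context, no application-side change is needed to take advantage of VA
mode: rte_kni_alloc() now fills dev_info.iova_mode internally from
rte_eal_iova_mode(). A minimal caller sketch (the interface name, mbuf size
and error handling below are illustrative, not taken from this patch):

    struct rte_kni_conf conf;
    struct rte_kni *kni;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
    conf.group_id = 0;
    conf.mbuf_size = 2048;

    /* Works unchanged in both IOVA=PA and IOVA=VA mode. */
    kni = rte_kni_alloc(pktmbuf_pool, &conf, NULL);
    if (kni == NULL)
        rte_panic("Failed to create KNI interface\n");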
From patchwork Tue Nov 5 11:04:16 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 62453
X-Patchwork-Delegate: thomas@monjalon.net
Date: Tue, 5 Nov 2019 16:34:16 +0530
Message-ID: <20191105110416.8955-3-vattunuru@marvell.com>
In-Reply-To: <20191105110416.8955-1-vattunuru@marvell.com>
References: <20191021080324.10659-1-vattunuru@marvell.com>
 <20191105110416.8955-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v12 2/2] kni: add IOVA=VA support in kernel module

From: Vamsi Attunuru

This patch adds support for the kernel module to work in IOVA = VA mode by
providing address translation routines that convert IOVA (i.e. user space
VA) to kernel virtual addresses. Compared with IOVA = PA mode there is no
drop in performance with this approach, and IOVA = PA mode itself is not
affected by this patch. This approach does not work with kernel versions
older than 4.6.0.
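
In short (a condensed sketch using the helpers this patch adds, not an
additional code change): in VA mode every pointer read from a shared FIFO
is a user-space virtual address, so it is resolved through the user
process' page tables instead of phys_to_virt().

    /* At device creation (kni_misc.c): FIFO pointers are translated once,
     * using the task that issued the ioctl. */
    kni->tx_q = iova_to_kva(current, dev_info.tx_phys);

    /* In the data path (kni_net.c): each mbuf and its data buffer are
     * translated per packet; get_kva()/get_data_kva() fall back to the
     * existing pa2kva()/kva2data_kva() when iova_mode == 0. */
    kva = get_kva(kni, kni->pa[i]);
    data_kva = get_data_kva(kni, kva);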
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 kernel/linux/kni/compat.h   | 15 +++++++++++
 kernel/linux/kni/kni_dev.h  | 42 ++++++++++++++++++++++++++++++
 kernel/linux/kni/kni_misc.c | 39 +++++++++++++++++++++-------
 kernel/linux/kni/kni_net.c  | 62 +++++++++++++++++++++++++++++++++++----------
 4 files changed, 136 insertions(+), 22 deletions(-)

diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
index 562d8bf..6591dd7 100644
--- a/kernel/linux/kni/compat.h
+++ b/kernel/linux/kni/compat.h
@@ -121,3 +121,18 @@
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
 #define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
 #endif
+
+#if KERNEL_VERSION(4, 6, 0) <= LINUX_VERSION_CODE
+
+#define HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+
+#if (KERNEL_VERSION(4, 6, 0) <= LINUX_VERSION_CODE) && \
+        (KERNEL_VERSION(4, 9, 0) > LINUX_VERSION_CODE)
+#define GET_USER_PAGES_REMOTE_API_V1
+#elif KERNEL_VERSION(4, 9, 0) == LINUX_VERSION_CODE
+#define GET_USER_PAGES_REMOTE_API_V2
+#else
+#define GET_USER_PAGES_REMOTE_API_V3
+#endif
+
+#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
index c1ca678..fb641b6 100644
--- a/kernel/linux/kni/kni_dev.h
+++ b/kernel/linux/kni/kni_dev.h
@@ -41,6 +41,8 @@ struct kni_dev {
         /* kni list */
         struct list_head list;
 
+        uint8_t iova_mode;
+
         uint32_t core_id;            /* Core ID to bind */
         char name[RTE_KNI_NAMESIZE]; /* Network device name */
         struct task_struct *pthread;
@@ -84,8 +86,48 @@ struct kni_dev {
         void *va[MBUF_BURST_SZ];
         void *alloc_pa[MBUF_BURST_SZ];
         void *alloc_va[MBUF_BURST_SZ];
+
+        struct task_struct *usr_tsk;
 };
 
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+static inline phys_addr_t iova_to_phys(struct task_struct *tsk,
+                                       unsigned long iova)
+{
+        phys_addr_t offset, phys_addr;
+        struct page *page = NULL;
+        long ret;
+
+        offset = iova & (PAGE_SIZE - 1);
+
+        /* Read one page struct info */
+#ifdef GET_USER_PAGES_REMOTE_API_V3
+        ret = get_user_pages_remote(tsk, tsk->mm, iova, 1,
+                                    FOLL_TOUCH, &page, NULL, NULL);
+#endif
+#ifdef GET_USER_PAGES_REMOTE_API_V2
+        ret = get_user_pages_remote(tsk, tsk->mm, iova, 1,
+                                    FOLL_TOUCH, &page, NULL);
+#endif
+#ifdef GET_USER_PAGES_REMOTE_API_V1
+        ret = get_user_pages_remote(tsk, tsk->mm, iova, 1,
+                                    0, 0, &page, NULL);
+#endif
+        if (ret < 0)
+                return 0;
+
+        phys_addr = page_to_phys(page) | offset;
+        put_page(page);
+
+        return phys_addr;
+}
+
+static inline void *iova_to_kva(struct task_struct *tsk, unsigned long iova)
+{
+        return phys_to_virt(iova_to_phys(tsk, iova));
+}
+#endif
+
 void kni_net_release_fifo_phy(struct kni_dev *kni);
 void kni_net_rx(struct kni_dev *kni);
 void kni_net_init(struct net_device *dev);
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
index 84ef03b..cda71bd 100644
--- a/kernel/linux/kni/kni_misc.c
+++ b/kernel/linux/kni/kni_misc.c
@@ -348,15 +348,36 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
         strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
 
         /* Translate user space info into kernel space info */
-        kni->tx_q = phys_to_virt(dev_info.tx_phys);
-        kni->rx_q = phys_to_virt(dev_info.rx_phys);
-        kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
-        kni->free_q = phys_to_virt(dev_info.free_phys);
-
-        kni->req_q = phys_to_virt(dev_info.req_phys);
-        kni->resp_q = phys_to_virt(dev_info.resp_phys);
-        kni->sync_va = dev_info.sync_va;
-        kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+        if (dev_info.iova_mode) {
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+                kni->tx_q = iova_to_kva(current, dev_info.tx_phys);
+                kni->rx_q = iova_to_kva(current, dev_info.rx_phys);
+                kni->alloc_q = iova_to_kva(current, dev_info.alloc_phys);
+                kni->free_q = iova_to_kva(current, dev_info.free_phys);
+
+                kni->req_q = iova_to_kva(current, dev_info.req_phys);
+                kni->resp_q = iova_to_kva(current, dev_info.resp_phys);
+                kni->sync_va = dev_info.sync_va;
+                kni->sync_kva = iova_to_kva(current, dev_info.sync_phys);
+                kni->usr_tsk = current;
+                kni->iova_mode = 1;
+#else
+                pr_err("KNI module does not support IOVA to VA translation\n");
+                return -EINVAL;
+#endif
+        } else {
+
+                kni->tx_q = phys_to_virt(dev_info.tx_phys);
+                kni->rx_q = phys_to_virt(dev_info.rx_phys);
+                kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
+                kni->free_q = phys_to_virt(dev_info.free_phys);
+
+                kni->req_q = phys_to_virt(dev_info.req_phys);
+                kni->resp_q = phys_to_virt(dev_info.resp_phys);
+                kni->sync_va = dev_info.sync_va;
+                kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+                kni->iova_mode = 0;
+        }
 
         kni->mbuf_size = dev_info.mbuf_size;
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index f25b127..1ba9b1b 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -36,6 +36,22 @@ static void kni_net_rx_normal(struct kni_dev *kni);
 /* kni rx function pointer, with default to normal rx */
 static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
 
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+/* iova to kernel virtual address */
+static inline void *
+iova2kva(struct kni_dev *kni, void *iova)
+{
+        return phys_to_virt(iova_to_phys(kni->usr_tsk, (unsigned long)iova));
+}
+
+static inline void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+        return phys_to_virt(iova_to_phys(kni->usr_tsk, m->buf_physaddr) +
+                            m->data_off);
+}
+#endif
+
 /* physical address to kernel virtual address */
 static void *
 pa2kva(void *pa)
@@ -62,6 +78,26 @@ kva2data_kva(struct rte_kni_mbuf *m)
         return phys_to_virt(m->buf_physaddr + m->data_off);
 }
 
+static inline void *
+get_kva(struct kni_dev *kni, void *pa)
+{
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+        if (kni->iova_mode == 1)
+                return iova2kva(kni, pa);
+#endif
+        return pa2kva(pa);
+}
+
+static inline void *
+get_data_kva(struct kni_dev *kni, void *pkt_kva)
+{
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+        if (kni->iova_mode == 1)
+                return iova2data_kva(kni, pkt_kva);
+#endif
+        return kva2data_kva(pkt_kva);
+}
+
 /*
  * It can be called to process the request.
  */
@@ -178,7 +214,7 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
                 return;
 
         for (i = 0; i < num_rx; i++) {
-                kva = pa2kva(kni->pa[i]);
+                kva = get_kva(kni, kni->pa[i]);
                 kni->va[i] = pa2va(kni->pa[i], kva);
 
                 kva_nb_segs = kva->nb_segs;
@@ -266,8 +302,8 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
         if (likely(ret == 1)) {
                 void *data_kva;
 
-                pkt_kva = pa2kva(pkt_pa);
-                data_kva = kva2data_kva(pkt_kva);
+                pkt_kva = get_kva(kni, pkt_pa);
+                data_kva = get_data_kva(kni, pkt_kva);
                 pkt_va = pa2va(pkt_pa, pkt_kva);
 
                 len = skb->len;
@@ -338,9 +374,9 @@ kni_net_rx_normal(struct kni_dev *kni)
 
         /* Transfer received packets to netif */
         for (i = 0; i < num_rx; i++) {
-                kva = pa2kva(kni->pa[i]);
+                kva = get_kva(kni, kni->pa[i]);
                 len = kva->pkt_len;
-                data_kva = kva2data_kva(kva);
+                data_kva = get_data_kva(kni, kva);
                 kni->va[i] = pa2va(kni->pa[i], kva);
 
                 skb = netdev_alloc_skb(dev, len);
@@ -437,9 +473,9 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
                 num = ret;
 
         /* Copy mbufs */
         for (i = 0; i < num; i++) {
-                kva = pa2kva(kni->pa[i]);
+                kva = get_kva(kni, kni->pa[i]);
                 len = kva->data_len;
-                data_kva = kva2data_kva(kva);
+                data_kva = get_data_kva(kni, kva);
                 kni->va[i] = pa2va(kni->pa[i], kva);
 
                 while (kva->next) {
@@ -449,8 +485,8 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
                         kva = next_kva;
                 }
 
-                alloc_kva = pa2kva(kni->alloc_pa[i]);
-                alloc_data_kva = kva2data_kva(alloc_kva);
+                alloc_kva = get_kva(kni, kni->alloc_pa[i]);
+                alloc_data_kva = get_data_kva(kni, alloc_kva);
                 kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
 
                 memcpy(alloc_data_kva, data_kva, len);
@@ -517,9 +553,9 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 
         /* Copy mbufs to sk buffer and then call tx interface */
         for (i = 0; i < num; i++) {
-                kva = pa2kva(kni->pa[i]);
+                kva = get_kva(kni, kni->pa[i]);
                 len = kva->pkt_len;
-                data_kva = kva2data_kva(kva);
+                data_kva = get_data_kva(kni, kva);
                 kni->va[i] = pa2va(kni->pa[i], kva);
 
                 skb = netdev_alloc_skb(dev, len);
@@ -550,8 +586,8 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
                                 break;
 
                         prev_kva = kva;
-                        kva = pa2kva(kva->next);
-                        data_kva = kva2data_kva(kva);
+                        kva = get_kva(kni, kva->next);
+                        data_kva = get_data_kva(kni, kva);
                         /* Convert physical address to virtual address */
                         prev_kva->next = pa2va(prev_kva->next, kva);
                 }
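
For reference, one way to exercise the new mode end to end is with the KNI
example application; the module path, core list and application options
below are illustrative and depend on the build and setup:

    # load the KNI kernel module built from this tree
    insmod ./kmod/rte_kni.ko

    # run the KNI example application with IOVA as VA
    ./examples/kni/build/kni -l 0-3 -n 4 --iova-mode=va -- \
        -p 0x1 -P --config="(0,1,2)"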