From patchwork Tue Jun 25 03:56:59 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 55279
X-Patchwork-Delegate: thomas@monjalon.net
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Cc: Vamsi Attunuru <vattunuru@marvell.com>, Kiran Kumar K <kirankumark@marvell.com>
Date: Tue, 25 Jun 2019 09:26:59 +0530
Message-ID: <20190625035700.2953-4-vattunuru@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20190625035700.2953-1-vattunuru@marvell.com>
References: <20190422061533.17538-1-kirankumark@marvell.com>
 <20190625035700.2953-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v6 3/4] example/kni: add IOVA support for kni application

From: Vamsi Attunuru <vattunuru@marvell.com>

The current KNI implementation operates only in IOVA = PA mode. This patch
adds support for IOVA = VA mode by addressing the page address translation
issues (IOVA <==> KVA).

With this patch, the KNI example application creates its mempool with the
MEMPOOL_F_NO_PAGE_BOUND flag to ensure that all mbuf memory lies within
page boundaries; the kernel KNI module then uses the iommu_iova_to_phys()
and phys_to_virt() APIs to obtain the corresponding kernel virtual
addresses.
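
Conceptually, the translation described above is a two-step lookup in the
kernel KNI module. The sketch below is illustrative only: it is not part of
this patch (the kernel-side changes are not included here), and the helper
name and the way the iommu_domain is obtained are assumptions.

#include <linux/iommu.h>
#include <linux/io.h>

/* Illustrative only: translate an mbuf IOVA to a kernel virtual address
 * in IOVA = VA mode, using the APIs named above. */
static inline void *
kni_iova_to_kva(struct iommu_domain *domain, dma_addr_t iova)
{
	/* IOVA -> physical address, via the IOMMU mapping */
	phys_addr_t pa = iommu_iova_to_phys(domain, iova);

	/* physical address -> kernel virtual address */
	return phys_to_virt(pa);
}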
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
 examples/kni/main.c                               | 53 ++++++++++++++++++++++-
 lib/librte_eal/linux/eal/eal.c                    |  8 ----
 lib/librte_eal/linux/eal/include/rte_kni_common.h |  1 +
 lib/librte_kni/rte_kni.c                          |  2 +
 4 files changed, 55 insertions(+), 9 deletions(-)

diff --git a/examples/kni/main.c b/examples/kni/main.c
index 4710d71..13083a7 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -945,6 +946,56 @@ kni_free_kni(uint16_t port_id)
 	return 0;
 }
 
+static struct rte_mempool *
+kni_packet_pool_create(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id)
+{
+	struct rte_pktmbuf_pool_private mbp_priv;
+	const char *mp_ops_name;
+	struct rte_mempool *mp;
+	unsigned int elt_size;
+	int ret;
+
+	if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
+		RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+			priv_size);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	elt_size = sizeof(struct rte_mbuf) + (unsigned int)priv_size +
+		(unsigned int)data_room_size;
+	mbp_priv.mbuf_data_room_size = data_room_size;
+	mbp_priv.mbuf_priv_size = priv_size;
+
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id,
+		MEMPOOL_F_NO_PAGE_BOUND);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	ret = rte_mempool_populate_default(mp);
+	if (ret < 0) {
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
+}
+
 /* Initialise ports/queues etc. and start main loop on each core */
 int
 main(int argc, char** argv)
@@ -975,7 +1026,7 @@ main(int argc, char** argv)
 		rte_exit(EXIT_FAILURE, "Could not parse input parameters\n");
 
 	/* Create the mbuf pool */
-	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+	pktmbuf_pool = kni_packet_pool_create("mbuf_pool", NB_MBUF,
 		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
 	if (pktmbuf_pool == NULL) {
 		rte_exit(EXIT_FAILURE, "Could not initialise mbuf pool\n");
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 3e1d6eb..d143c49 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -1041,14 +1041,6 @@ rte_eal_init(int argc, char **argv)
 		rte_eal_get_configuration()->iova_mode =
 			rte_bus_get_iommu_class();
 
-		/* Workaround for KNI which requires physical address to work */
-		if (rte_eal_get_configuration()->iova_mode == RTE_IOVA_VA &&
-				rte_eal_check_module("rte_kni") == 1) {
-			rte_eal_get_configuration()->iova_mode = RTE_IOVA_PA;
-			RTE_LOG(WARNING, EAL,
-				"Some devices want IOVA as VA but PA will be used because.. "
-				"KNI module inserted\n");
-		}
 	} else {
 		rte_eal_get_configuration()->iova_mode =
 			internal_config.iova_mode;
diff --git a/lib/librte_eal/linux/eal/include/rte_kni_common.h b/lib/librte_eal/linux/eal/include/rte_kni_common.h
index 5db5a13..404c85d 100644
--- a/lib/librte_eal/linux/eal/include/rte_kni_common.h
+++ b/lib/librte_eal/linux/eal/include/rte_kni_common.h
@@ -128,6 +128,7 @@ struct rte_kni_device_info {
 	unsigned mbuf_size;
 	unsigned int mtu;
 	uint8_t mac_addr[6];
+	uint8_t iova_mode;
 };
 
 #define KNI_DEVICE "kni"
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 99c4bf5..4263f21 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -300,6 +300,8 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->group_id = conf->group_id;
 	kni->mbuf_size = conf->mbuf_size;
 
+	dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
+
 	ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
 	if (ret < 0)
 		goto ioctl_fail;