From patchwork Mon Jul 29 12:13:09 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 57222
Subject: [dpdk-dev] [PATCH v9 1/5] mempool: populate mempool with the page
 sized chunks of memory
Date: Mon, 29 Jul 2019 17:43:09 +0530
Message-ID: <20190729121313.30639-2-vattunuru@marvell.com>

From: Vamsi Attunuru

This patch adds a routine that populates a mempool from page aligned
and page sized chunks of memory, so that no memory object falls across
a page boundary. It is useful for applications that require physically
contiguous mbuf memory while running in IOVA=VA mode.
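To illustrate the intended use, here is a minimal sketch (hypothetical
application code; the pool name and sizes are assumptions) that creates
an empty mempool and fills it with the new routine instead of
rte_mempool_populate_default():

#include <rte_mempool.h>
#include <rte_lcore.h>

static struct rte_mempool *
create_page_safe_pool(void)
{
	struct rte_mempool *mp;

	/* Empty pool: 8191 objects of 2048 bytes, 256-deep per-core cache. */
	mp = rte_mempool_create_empty("pg_pool", 8191, 2048, 256,
				      0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Populate from page sized memzone chunks; an object that would
	 * straddle a page is never created, the slack at the end of each
	 * page is simply left unused.
	 */
	if (rte_mempool_populate_from_pg_sz_chunks(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}

With 4 KiB pages and 2 KiB objects, for example, two objects fit per
page and the third is placed at the start of the next page rather than
across the boundary.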
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
Acked-by: Andrew Rybchenko
---
 lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 17 ++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 3 files changed, 82 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..00619bd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,70 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate mempool from page sized mem chunks: allocate
+ * page sized memzones and populate them. Return the number of objects
+ * added, or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0 || min_chunk_size > pg_sz)
+			goto fail;
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..3046e4f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * Add memory from page sized memzones for objects in the pool at init
+ *
+ * This function populates the mempool with page aligned and page sized
+ * memzone memory, so that no object is spread across two pages and
+ * every mempool object resides entirely within one page.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no chunk is added to the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..9a6fe65 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	rte_mempool_populate_from_pg_sz_chunks;
 };

From patchwork Mon Jul 29 12:13:10 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 57223
Subject: [dpdk-dev] [PATCH v9 2/5] kni: add IOVA=VA support in KNI lib
Date: Mon, 29 Jul 2019 17:43:10 +0530
Message-ID: <20190729121313.30639-3-vattunuru@marvell.com>

From: Vamsi Attunuru

The current KNI implementation operates only in IOVA=PA mode. This
patch adds the functionality required in the KNI library to support
IOVA=VA mode as well. The KNI kernel module needs device info to look
up the iommu domain used for IOVA address translations. The patch
therefore adds device related fields to the rte_kni_device_info
structure and passes the device info to the kernel KNI module when
IOVA=VA mode is enabled.
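For context, a standalone sketch of the lookup the library performs
(hypothetical helper mirroring kni_dev_pci_addr_get() below; note that
RTE_DEV_TO_PCI comes from the private rte_bus_pci.h header, which is
why the patch adds drivers/bus/pci to the include path):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_bus.h>
#include <rte_bus_pci.h>

static int
port_pci_addr(uint16_t port_id, struct rte_pci_addr *addr)
{
	struct rte_eth_dev_info dev_info;
	const struct rte_bus *bus;
	const struct rte_pci_device *pci_dev;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.device == NULL)
		return -1;

	bus = rte_bus_find_by_device(dev_info.device);
	if (bus == NULL || strcmp(bus->name, "pci") != 0)
		return -1; /* not a PCI-backed port */

	pci_dev = RTE_DEV_TO_PCI(dev_info.device);
	*addr = pci_dev->addr; /* bus/devid/function, as sent to the module */
	return 0;
}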
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 lib/librte_eal/linux/eal/include/rte_kni_common.h |  8 ++++++
 lib/librte_kni/Makefile                           |  1 +
 lib/librte_kni/meson.build                        |  1 +
 lib/librte_kni/rte_kni.c                          | 30 +++++++++++++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/lib/librte_eal/linux/eal/include/rte_kni_common.h b/lib/librte_eal/linux/eal/include/rte_kni_common.h
index 37d9ee8..4fd8a90 100644
--- a/lib/librte_eal/linux/eal/include/rte_kni_common.h
+++ b/lib/librte_eal/linux/eal/include/rte_kni_common.h
@@ -111,6 +111,13 @@ struct rte_kni_device_info {
 	void * mbuf_va;
 	phys_addr_t mbuf_phys;
 
+	/* PCI info */
+	uint16_t vendor_id;     /**< Vendor ID or PCI_ANY_ID. */
+	uint16_t device_id;     /**< Device ID or PCI_ANY_ID. */
+	uint8_t bus;            /**< Device bus */
+	uint8_t devid;          /**< Device ID */
+	uint8_t function;       /**< Device function. */
+
 	uint16_t group_id;      /**< Group ID */
 	uint32_t core_id;       /**< core ID to bind for kernel thread */
@@ -121,6 +128,7 @@ struct rte_kni_device_info {
 	unsigned mbuf_size;
 	unsigned int mtu;
 	uint8_t mac_addr[6];
+	uint8_t iova_mode;
 };
 
 #define KNI_DEVICE "kni"
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index cbd6599..ab15d10 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -7,6 +7,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_kni.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
+CFLAGS += -I$(RTE_SDK)/drivers/bus/pci
 LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
 
 EXPORT_MAP := rte_kni_version.map
diff --git a/lib/librte_kni/meson.build b/lib/librte_kni/meson.build
index 41fa2e3..fd46f87 100644
--- a/lib/librte_kni/meson.build
+++ b/lib/librte_kni/meson.build
@@ -9,3 +9,4 @@ version = 2
 sources = files('rte_kni.c')
 headers = files('rte_kni.h')
 deps += ['ethdev', 'pci']
+includes += include_directories('../../drivers/bus/pci')
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 4b51fb4..2aaaeaa 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <rte_bus_pci.h>
 #include
 #include
 #include
@@ -199,6 +200,27 @@ kni_release_mz(struct rte_kni *kni)
 	rte_memzone_free(kni->m_sync_addr);
 }
 
+static void
+kni_dev_pci_addr_get(uint16_t port_id, struct rte_kni_device_info *kni_dev_info)
+{
+	const struct rte_pci_device *pci_dev;
+	struct rte_eth_dev_info dev_info;
+	const struct rte_bus *bus = NULL;
+
+	rte_eth_dev_info_get(port_id, &dev_info);
+
+	if (dev_info.device)
+		bus = rte_bus_find_by_device(dev_info.device);
+	if (bus && !strcmp(bus->name, "pci")) {
+		pci_dev = RTE_DEV_TO_PCI(dev_info.device);
+		kni_dev_info->bus = pci_dev->addr.bus;
+		kni_dev_info->devid = pci_dev->addr.devid;
+		kni_dev_info->function = pci_dev->addr.function;
+		kni_dev_info->vendor_id = pci_dev->id.vendor_id;
+		kni_dev_info->device_id = pci_dev->id.device_id;
+	}
+}
+
 struct rte_kni *
 rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	      const struct rte_kni_conf *conf,
@@ -247,6 +269,12 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->ops.port_id = UINT16_MAX;
 
 	memset(&dev_info, 0, sizeof(dev_info));
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		uint16_t port_id = conf->group_id;
+
+		kni_dev_pci_addr_get(port_id, &dev_info);
+	}
 	dev_info.core_id = conf->core_id;
 	dev_info.force_bind = conf->force_bind;
 	dev_info.group_id = conf->group_id;
@@ -300,6 +328,8 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->group_id = conf->group_id;
 	kni->mbuf_size = conf->mbuf_size;
 
+	dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
+
 	ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
 	if (ret < 0)
 		goto ioctl_fail;

From patchwork Mon Jul 29 12:13:11 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 57224
Subject: [dpdk-dev] [PATCH v9 3/5] kni: add app specific mempool create & free routine
Date: Mon, 29 Jul 2019 17:43:11 +0530
Message-ID: <20190729121313.30639-4-vattunuru@marvell.com>

From: Vamsi Attunuru

When KNI operates in IOVA=VA mode, it requires mbuf memory to be
physically contiguous so that the KNI kernel module can translate
IOVA addresses properly. This patch adds a KNI specific mempool create
routine that populates the KNI packet mbuf pool with memory objects
that each reside within a single page. KNI applications should use
these mempool create & free routines, so that the mbuf requirements of
IOVA=VA mode are handled inside them according to the enabled mode.
The release notes are updated with details of the new routines.
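A minimal usage sketch (hypothetical application code; pool sizes are
assumptions, and the examples/kni change below makes the same
substitution in main()):

#include <rte_kni.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define KNI_NB_MBUF  (4 * 1024)
#define KNI_CACHE_SZ 256

static struct rte_mempool *kni_pool;

static int
kni_pool_setup(void)
{
	/* Picks the populate routine matching the active IOVA mode. */
	kni_pool = rte_kni_pktmbuf_pool_create("kni_mbuf_pool",
			KNI_NB_MBUF, KNI_CACHE_SZ, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	return kni_pool == NULL ? -1 : 0;
}

static void
kni_pool_teardown(void)
{
	rte_kni_pktmbuf_pool_free(kni_pool);
}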
Signed-off-by: Vamsi Attunuru
---
 doc/guides/rel_notes/release_19_08.rst |  6 ++++
 examples/kni/main.c                    |  5 ++-
 lib/librte_kni/Makefile                |  1 +
 lib/librte_kni/meson.build             |  1 +
 lib/librte_kni/rte_kni.c               | 60 ++++++++++++++++++++++++++++++++++
 lib/librte_kni/rte_kni.h               | 48 +++++++++++++++++++++++++++
 lib/librte_kni/rte_kni_version.map     |  2 ++
 7 files changed, 122 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index c9bd3ce..b200aae 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -301,6 +301,12 @@ API Changes
   best-effort tc, and qsize field of struct ``rte_sched_port_params`` is
   changed to allow different size of the each queue.
 
+* kni: ``rte_kni_pktmbuf_pool_create`` and ``rte_kni_pktmbuf_pool_free``
+  functions were introduced for KNI applications to create and free packet
+  pools. Since IOVA=VA mode was added in KNI, a packet pool's mbuf memory
+  must be physically contiguous for the KNI kernel module to work in this
+  mode; this requirement is handled in the KNI packet pool creation functions.
+
 ABI Changes
 -----------
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 4710d71..fdfeed2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -975,7 +975,7 @@ main(int argc, char** argv)
 		rte_exit(EXIT_FAILURE, "Could not parse input parameters\n");
 
 	/* Create the mbuf pool */
-	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+	pktmbuf_pool = rte_kni_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
 		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
 	if (pktmbuf_pool == NULL) {
 		rte_exit(EXIT_FAILURE, "Could not initialise mbuf pool\n");
@@ -1043,6 +1043,9 @@ main(int argc, char** argv)
 			continue;
 		kni_free_kni(port);
 	}
+
+	rte_kni_pktmbuf_pool_free(pktmbuf_pool);
+
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++)
 		if (kni_port_params_array[i]) {
 			rte_free(kni_port_params_array[i]);
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index ab15d10..5e3dd01 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -6,6 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_kni.a
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
 CFLAGS += -I$(RTE_SDK)/drivers/bus/pci
 LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
diff --git a/lib/librte_kni/meson.build b/lib/librte_kni/meson.build
index fd46f87..e357445 100644
--- a/lib/librte_kni/meson.build
+++ b/lib/librte_kni/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
 	build = false
 	reason = 'only supported on 64-bit linux'
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 2aaaeaa..15dda45 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <rte_mbuf_pool_ops.h>
 #include
 #include "rte_kni_fifo.h"
@@ -681,6 +682,65 @@ kni_allocate_mbufs(struct rte_kni *kni)
 	}
 }
 
+struct rte_mempool *
+rte_kni_pktmbuf_pool_create(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id)
+{
+	struct rte_pktmbuf_pool_private mbp_priv;
+	const char *mp_ops_name;
+	struct rte_mempool *mp;
+	unsigned int elt_size;
+	int ret;
+
+	if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
+		RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+			priv_size);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	elt_size = sizeof(struct rte_mbuf) + (unsigned int)priv_size +
+		(unsigned int)data_room_size;
+	mbp_priv.mbuf_data_room_size = data_room_size;
+	mbp_priv.mbuf_priv_size = priv_size;
+
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA)
+		ret = rte_mempool_populate_from_pg_sz_chunks(mp);
+	else
+		ret = rte_mempool_populate_default(mp);
+
+	if (ret < 0) {
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
+}
+
+void
+rte_kni_pktmbuf_pool_free(struct rte_mempool *mp)
+{
+	rte_mempool_free(mp);
+}
+
 struct rte_kni *
 rte_kni_get(const char *name)
 {
diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 5699a64..99d263d 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -184,6 +184,54 @@ unsigned rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
 	unsigned num);
 
 /**
+ * Create a KNI packet mbuf pool.
+ *
+ * This function creates and initializes a packet mbuf pool for KNI applications.
+ * It calls the required mempool populate routine based on the IOVA mode.
+ *
+ * @param name
+ *   The name of the mbuf pool.
+ * @param n
+ *   The number of elements in the mbuf pool. The optimum size (in terms
+ *   of memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param priv_size
+ *   Size of the application private area between the rte_mbuf structure
+ *   and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
+ * @param data_room_size
+ *   Size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM.
+ * @param socket_id
+ *   The socket identifier where the memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ * @return
+ *   The pointer to the newly allocated mempool on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+__rte_experimental
+struct rte_mempool *rte_kni_pktmbuf_pool_create(const char *name,
+	unsigned int n, unsigned int cache_size, uint16_t priv_size,
+	uint16_t data_room_size, int socket_id);
+
+/**
+ * Free the given packet mempool.
+ *
+ * @param mp
+ *   The mempool pointer.
+ */
+__rte_experimental
+void rte_kni_pktmbuf_pool_free(struct rte_mempool *mp);
+
+/**
  * Get the KNI context of its name.
  *
  * @param name
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6..aba9728 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -20,4 +20,6 @@ EXPERIMENTAL {
 	global:
 
 	rte_kni_update_link;
+	rte_kni_pktmbuf_pool_create;
+	rte_kni_pktmbuf_pool_free;
 };

From patchwork Mon Jul 29 12:13:12 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 57225
Subject: [dpdk-dev] [PATCH v9 4/5] kni: add IOVA=VA support in KNI module
Date: Mon, 29 Jul 2019 17:43:12 +0530
Message-ID: <20190729121313.30639-5-vattunuru@marvell.com>
From: Kiran Kumar K

This patch adds support for the kernel module to work in IOVA=VA mode.
The idea is to obtain the physical address from the IOVA address using
the iommu_iova_to_phys API and then convert that physical address to a
kernel virtual address using phys_to_virt. Compared with IOVA=PA mode,
there is no performance drop with this approach. The approach does not
work with kernel versions below 4.4.0 because of API compatibility
issues. The patch also documents this support in the KNI documentation.
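The translation chain boils down to the following kernel-side sketch
(illustrative only; it assumes the iommu domain has already been
resolved for the device, as kni_ioctl_create() does below):

#include <linux/iommu.h>
#include <linux/io.h>

/* Illustrative helper: IOVA -> physical -> kernel virtual address. */
static void *
iova_to_kva(struct iommu_domain *domain, dma_addr_t iova)
{
	phys_addr_t phys;

	phys = iommu_iova_to_phys(domain, iova); /* IOVA -> physical */
	return phys_to_virt(phys);               /* physical -> KVA  */
}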
Signed-off-by: Kiran Kumar K
Signed-off-by: Vamsi Attunuru
---
 kernel/linux/kni/compat.h   |  4 +++
 kernel/linux/kni/kni_dev.h  |  4 +++
 kernel/linux/kni/kni_misc.c | 71 +++++++++++++++++++++++++++++++++++++++------
 kernel/linux/kni/kni_net.c  | 59 ++++++++++++++++++++++++++++---------
 4 files changed, 116 insertions(+), 22 deletions(-)

diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
index 562d8bf..ee997a6 100644
--- a/kernel/linux/kni/compat.h
+++ b/kernel/linux/kni/compat.h
@@ -121,3 +121,7 @@
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
 #define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
 #endif
+
+#if KERNEL_VERSION(4, 4, 0) <= LINUX_VERSION_CODE
+#define HAVE_IOVA_AS_VA_SUPPORT
+#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
index c1ca678..d5898f3 100644
--- a/kernel/linux/kni/kni_dev.h
+++ b/kernel/linux/kni/kni_dev.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/iommu.h>
 #include
 
 #define KNI_KTHREAD_RESCHEDULE_INTERVAL 5 /* us */
@@ -41,6 +42,9 @@ struct kni_dev {
 	/* kni list */
 	struct list_head list;
 
+	uint8_t iova_mode;
+	struct iommu_domain *domain;
+
 	uint32_t core_id;            /* Core ID to bind */
 	char name[RTE_KNI_NAMESIZE]; /* Network device name */
 	struct task_struct *pthread;
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
index 2b75502..8660205 100644
--- a/kernel/linux/kni/kni_misc.c
+++ b/kernel/linux/kni/kni_misc.c
@@ -295,6 +295,9 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
 	struct rte_kni_device_info dev_info;
 	struct net_device *net_dev = NULL;
 	struct kni_dev *kni, *dev, *n;
+	struct pci_dev *pci = NULL;
+	struct iommu_domain *domain = NULL;
+	phys_addr_t phys_addr;
 
 	pr_info("Creating kni...\n");
 	/* Check the buffer size, to avoid warning */
@@ -348,15 +351,65 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
 	strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
 
 	/* Translate user space info into kernel space info */
-	kni->tx_q = phys_to_virt(dev_info.tx_phys);
-	kni->rx_q = phys_to_virt(dev_info.rx_phys);
-	kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
-	kni->free_q = phys_to_virt(dev_info.free_phys);
-
-	kni->req_q = phys_to_virt(dev_info.req_phys);
-	kni->resp_q = phys_to_virt(dev_info.resp_phys);
-	kni->sync_va = dev_info.sync_va;
-	kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+	if (dev_info.iova_mode) {
+#ifdef HAVE_IOVA_AS_VA_SUPPORT
+		pci = pci_get_device(dev_info.vendor_id,
+				     dev_info.device_id, NULL);
+		if (pci == NULL) {
+			pr_err("pci dev does not exist\n");
+			return -ENODEV;
+		}
+
+		while (pci) {
+			if ((pci->bus->number == dev_info.bus) &&
+			    (PCI_SLOT(pci->devfn) == dev_info.devid) &&
+			    (PCI_FUNC(pci->devfn) == dev_info.function)) {
+				domain = iommu_get_domain_for_dev(&pci->dev);
+				break;
+			}
+			pci = pci_get_device(dev_info.vendor_id,
+					     dev_info.device_id, pci);
+		}
+
+		if (domain == NULL) {
+			pr_err("Failed to get pci dev domain info\n");
+			return -ENODEV;
+		}
+#else
+		pr_err("Kernel version does not support IOVA as VA\n");
+		return -EINVAL;
+#endif
+		kni->domain = domain;
+		phys_addr = iommu_iova_to_phys(domain, dev_info.tx_phys);
+		kni->tx_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.rx_phys);
+		kni->rx_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.alloc_phys);
+		kni->alloc_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.free_phys);
+		kni->free_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.req_phys);
+		kni->req_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.resp_phys);
+		kni->resp_q = phys_to_virt(phys_addr);
+		kni->sync_va = dev_info.sync_va;
+		phys_addr = iommu_iova_to_phys(domain, dev_info.sync_phys);
+		kni->sync_kva = phys_to_virt(phys_addr);
+		kni->iova_mode = 1;
+
+	} else {
+
+		kni->tx_q = phys_to_virt(dev_info.tx_phys);
+		kni->rx_q = phys_to_virt(dev_info.rx_phys);
+		kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
+		kni->free_q = phys_to_virt(dev_info.free_phys);
+
+		kni->req_q = phys_to_virt(dev_info.req_phys);
+		kni->resp_q = phys_to_virt(dev_info.resp_phys);
+		kni->sync_va = dev_info.sync_va;
+		kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+		kni->iova_mode = 0;
+	}
 
 	kni->mbuf_size = dev_info.mbuf_size;
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index 7bd3a9f..8382859 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -36,6 +36,21 @@ static void kni_net_rx_normal(struct kni_dev *kni);
 /* kni rx function pointer, with default to normal rx */
 static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
 
+/* iova to kernel virtual address */
+static inline void *
+iova2kva(struct kni_dev *kni, void *pa)
+{
+	return phys_to_virt(iommu_iova_to_phys(kni->domain,
+				(uintptr_t)pa));
+}
+
+static inline void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+	return phys_to_virt(iommu_iova_to_phys(kni->domain,
+				(uintptr_t)m->buf_physaddr) + m->data_off);
+}
+
 /* physical address to kernel virtual address */
 static void *
 pa2kva(void *pa)
@@ -62,6 +77,24 @@ kva2data_kva(struct rte_kni_mbuf *m)
 	return phys_to_virt(m->buf_physaddr + m->data_off);
 }
 
+static inline void *
+get_kva(struct kni_dev *kni, void *pa)
+{
+	if (kni->iova_mode == 1)
+		return iova2kva(kni, pa);
+
+	return pa2kva(pa);
+}
+
+static inline void *
+get_data_kva(struct kni_dev *kni, void *pkt_kva)
+{
+	if (kni->iova_mode == 1)
+		return iova2data_kva(kni, pkt_kva);
+
+	return kva2data_kva(pkt_kva);
+}
+
 /*
  * It can be called to process the request.
  */
@@ -178,7 +211,7 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
 		return;
 
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		kva_nb_segs = kva->nb_segs;
@@ -266,8 +299,8 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 	if (likely(ret == 1)) {
 		void *data_kva;
 
-		pkt_kva = pa2kva(pkt_pa);
-		data_kva = kva2data_kva(pkt_kva);
+		pkt_kva = get_kva(kni, pkt_pa);
+		data_kva = get_data_kva(kni, pkt_kva);
 		pkt_va = pa2va(pkt_pa, pkt_kva);
 
 		len = skb->len;
@@ -338,9 +371,9 @@ kni_net_rx_normal(struct kni_dev *kni)
 
 	/* Transfer received packets to netif */
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -437,9 +470,9 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 		num = ret;
 		/* Copy mbufs */
 		for (i = 0; i < num; i++) {
-			kva = pa2kva(kni->pa[i]);
+			kva = get_kva(kni, kni->pa[i]);
 			len = kva->data_len;
-			data_kva = kva2data_kva(kva);
+			data_kva = get_data_kva(kni, kva);
 			kni->va[i] = pa2va(kni->pa[i], kva);
 
 			while (kva->next) {
@@ -449,8 +482,8 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 				kva = next_kva;
 			}
 
-			alloc_kva = pa2kva(kni->alloc_pa[i]);
-			alloc_data_kva = kva2data_kva(alloc_kva);
+			alloc_kva = get_kva(kni, kni->alloc_pa[i]);
+			alloc_data_kva = get_data_kva(kni, alloc_kva);
 			kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
 
 			memcpy(alloc_data_kva, data_kva, len);
@@ -517,9 +550,9 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 
 	/* Copy mbufs to sk buffer and then call tx interface */
 	for (i = 0; i < num; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -550,8 +583,8 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 			break;
 
 		prev_kva = kva;
-		kva = pa2kva(kva->next);
-		data_kva = kva2data_kva(kva);
+		kva = get_kva(kni, kva->next);
+		data_kva = get_data_kva(kni, kva);
 		/* Convert physical address to virtual address */
 		prev_kva->next = pa2va(prev_kva->next, kva);
 	}
From patchwork Mon Jul 29 12:13:13 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 57226
Subject: [dpdk-dev] [PATCH v9 5/5] kni: modify IOVA mode checks to support VA
Date: Mon, 29 Jul 2019 17:43:13 +0530
Message-ID: <20190729121313.30639-6-vattunuru@marvell.com>

From: Vamsi Attunuru

This patch relaxes the checks in KNI and EAL that force IOVA=PA when
IOVA=VA mode is enabled, since the KNI kernel module supports VA mode
on kernel versions >= 4.4.0. The KNI documentation is updated with
these details.
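With these checks relaxed, an application only has to pass the
`--iova-mode va` EAL parameter (or rely on the bus IOVA scheme) and can
verify the outcome after EAL init; a minimal sketch (hypothetical
application code):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_kni.h>

/* Confirm the selected IOVA mode before setting up KNI. */
static int
init_eal_and_kni(int argc, char **argv, unsigned int max_kni_ifaces)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	if (rte_eal_iova_mode() == RTE_IOVA_VA)
		printf("KNI will run in IOVA=VA mode (kernel >= 4.4.0 required)\n");

	return rte_kni_init(max_kni_ifaces);
}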
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 doc/guides/prog_guide/kernel_nic_interface.rst | 8 ++++++++
 lib/librte_eal/linux/eal/eal.c                 | 4 +++-
 lib/librte_kni/rte_kni.c                       | 5 -----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 38369b3..fd2ce63 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -291,6 +291,14 @@ The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
 The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
 It then puts the mbuf back in the cache.
 
+IOVA = VA: Support
+------------------
+
+KNI can be operated in the IOVA as VA scheme when the following criteria are fulfilled:
+
+- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
+- The EAL parameter `--iova-mode va` is passed or the bus IOVA scheme is set to RTE_IOVA_VA.
+
 Ethtool
 -------
 
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 34db787..428425d 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -1068,12 +1068,14 @@ rte_eal_init(int argc, char **argv)
 		/* Workaround for KNI which requires physical address to work */
 		if (iova_mode == RTE_IOVA_VA &&
 				rte_eal_check_module("rte_kni") == 1) {
+#if KERNEL_VERSION(4, 4, 0) > LINUX_VERSION_CODE
 			if (phys_addrs) {
 				iova_mode = RTE_IOVA_PA;
-				RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
+				RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module does not support VA\n");
 			} else {
 				RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
 			}
+#endif
 		}
 #endif
 	rte_eal_get_configuration()->iova_mode = iova_mode;
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 15dda45..c77d76f 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -99,11 +99,6 @@ static volatile int kni_fd = -1;
 int
 rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
 {
-	if (rte_eal_iova_mode() != RTE_IOVA_PA) {
-		RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
-		return -1;
-	}
-
 	/* Check FD and open */
 	if (kni_fd < 0) {
 		kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);