Message ID: 20220419034323.92820-1-xuan.ding@intel.com (mailing list archive)
Headers:
From: Xuan Ding <xuan.ding@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com
Subject: [PATCH v3 0/5] vhost: support async dequeue data path
Date: Tue, 19 Apr 2022 03:43:18 +0000
Message-Id: <20220419034323.92820-1-xuan.ding@intel.com>
In-Reply-To: <20220407152546.38167-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
Series: vhost: support async dequeue data path
Message
Ding, Xuan
April 19, 2022, 3:43 a.m. UTC
From: Xuan Ding <xuan.ding@intel.com>
The asynchronous data path allows applications to offload memory
copies to a DMA engine, saving CPU cycles and improving copy
performance. This patch set implements the vhost async dequeue data
path for split ring. The code is based on the latest enqueue changes [1].
This patch set is a new design and implementation of [2]. Since dmadev
was introduced in DPDK 21.11, this patch set integrates dmadev into
vhost to simplify application logic. With dmadev integrated, vhost
supports M:N mapping between vrings and DMA virtual channels:
one vring can use multiple different DMA virtual channels, and one DMA
virtual channel can be shared by multiple vrings at the same time.
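The M:N mapping described above can be sketched as simple per-vring
bookkeeping. This is an illustrative model only, not the patch set's
internal data structures; all names (vring_attach_vchan,
vring_next_vchan, the limits) are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical bookkeeping for an M:N vring <-> DMA vchan mapping:
 * each vring records which (dma_id, vchan_id) pairs it may submit
 * copies to, and the same pair may appear under several vrings. */
#define MAX_VRINGS 8
#define MAX_CHANNELS_PER_VRING 4

struct dma_vchan_ref {
	uint16_t dma_id;
	uint16_t vchan_id;
};

struct vring_dma_map {
	struct dma_vchan_ref chans[MAX_CHANNELS_PER_VRING];
	int nr_chans;
};

static struct vring_dma_map vring_map[MAX_VRINGS];

/* Attach a DMA virtual channel to a vring; the same channel may be
 * attached to any number of vrings (the N side of M:N). */
static int
vring_attach_vchan(uint16_t vring_id, uint16_t dma_id, uint16_t vchan_id)
{
	struct vring_dma_map *m;

	if (vring_id >= MAX_VRINGS)
		return -1;
	m = &vring_map[vring_id];
	if (m->nr_chans == MAX_CHANNELS_PER_VRING)
		return -1;
	m->chans[m->nr_chans].dma_id = dma_id;
	m->chans[m->nr_chans].vchan_id = vchan_id;
	m->nr_chans++;
	return 0;
}

/* Round-robin pick of the next channel for a vring (the M side:
 * one vring spreading work across multiple channels). */
static const struct dma_vchan_ref *
vring_next_vchan(uint16_t vring_id, unsigned int *cursor)
{
	const struct vring_dma_map *m;

	if (vring_id >= MAX_VRINGS)
		return NULL;
	m = &vring_map[vring_id];
	if (m->nr_chans == 0)
		return NULL;
	return &m->chans[(*cursor)++ % m->nr_chans];
}
```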
A new asynchronous dequeue function is introduced:
1) rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
uint16_t count, int *nr_inflight,
uint16_t dma_id, uint16_t vchan_id)
Receives packets from the guest and offloads the copies to a DMA
virtual channel.
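A typical call site for the new API would be a polling loop in the
application's datapath. The sketch below uses the signature quoted
above; the struct definitions and the function body are stand-in stubs
(labeled as such) so the control flow compiles standalone, since the
real symbols live in DPDK's <rte_mbuf.h>, <rte_mempool.h> and
<rte_vhost_async.h>:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for the DPDK types; real applications include
 * the DPDK headers instead of defining these. */
struct rte_mempool { int dummy; };
struct rte_mbuf { int dummy; };

/* Stub with the signature from the cover letter. The real function is
 * the one this patch set adds; this stub just pretends every requested
 * packet was dequeued and every DMA copy already completed. */
static uint16_t
rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
		struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
		uint16_t count, int *nr_inflight,
		uint16_t dma_id, uint16_t vchan_id)
{
	(void)vid; (void)queue_id; (void)mbuf_pool; (void)pkts;
	(void)dma_id; (void)vchan_id;
	*nr_inflight = 0;	/* no copies still in flight */
	return count;
}

/* One iteration of the polling-loop shape an application might use:
 * try to dequeue a burst, then hand the packets to the next stage
 * (e.g. rte_eth_tx_burst) and track copies still in flight. */
static uint16_t
poll_dequeue_once(int vid, struct rte_mempool *pool)
{
	struct rte_mbuf *pkts[32];
	int nr_inflight = 0;
	uint16_t nr;

	nr = rte_vhost_async_try_dequeue_burst(vid, /*queue_id=*/1,
			pool, pkts, 32, &nr_inflight,
			/*dma_id=*/0, /*vchan_id=*/0);
	(void)nr_inflight;	/* a real app would poll completions */
	return nr;
}
```

Note that queue_id, dma_id and vchan_id values here are arbitrary
placeholders; valid values depend on the application's vhost and dmadev
configuration.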
[1] https://mails.dpdk.org/archives/dev/2022-February/234555.html
[2] https://mails.dpdk.org/archives/dev/2021-September/218591.html
v2->v3:
* fix mbuf not updated correctly for large packets
v1->v2:
* fix a typo
* fix a bug in desc_to_mbuf filling
RFC v3 -> v1:
* add sync and async path descriptor to mbuf refactoring
* add API description in docs
RFC v2 -> RFC v3:
* rebase to latest DPDK version
RFC v1 -> RFC v2:
* fix one bug in example
* rename vchan to vchan_id
* check that dma_id and vchan_id are valid
* rework all the logs to new standard
Xuan Ding (5):
vhost: prepare sync for descriptor to mbuf refactoring
vhost: prepare async for descriptor to mbuf refactoring
vhost: merge sync and async descriptor to mbuf filling
vhost: support async dequeue for split ring
examples/vhost: support async dequeue data path
doc/guides/prog_guide/vhost_lib.rst | 7 +
doc/guides/rel_notes/release_22_07.rst | 4 +
doc/guides/sample_app_ug/vhost.rst | 9 +-
examples/vhost/main.c | 292 ++++++++++-----
examples/vhost/main.h | 35 +-
examples/vhost/virtio_net.c | 16 +-
lib/vhost/rte_vhost_async.h | 33 ++
lib/vhost/version.map | 3 +
lib/vhost/vhost.h | 1 +
lib/vhost/virtio_net.c | 470 ++++++++++++++++++++++---
10 files changed, 720 insertions(+), 150 deletions(-)