get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch.
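
For illustration only (this is not part of the endpoint output): a patch detail such as the one shown below can be retrieved with any HTTP client. The sketch uses Python's requests library and assumes anonymous read access, i.e. that GET requests to this Patchwork instance need no authentication.

    # Minimal sketch: fetch patch 74515 from the public DPDK Patchwork instance.
    import requests

    resp = requests.get("http://patchwork.dpdk.org/api/patches/74515/")
    resp.raise_for_status()
    patch = resp.json()

    print(patch["name"])    # "[v3] vhost: fix wrong async completion of multi-seg packets"
    print(patch["state"])   # "accepted"
    print(patch["mbox"])    # URL of the raw mbox for this patch

The response body of that request is reproduced below.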

GET /api/patches/74515/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 74515,
    "url": "http://patchwork.dpdk.org/api/patches/74515/?format=api",
    "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20200721054720.3417804-1-patrick.fu@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patchwork.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20200721054720.3417804-1-patrick.fu@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20200721054720.3417804-1-patrick.fu@intel.com",
    "date": "2020-07-21T05:47:20",
    "name": "[v3] vhost: fix wrong async completion of multi-seg packets",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "adeb97cb08cc9e0c7f6706d94dc2348c145b99dd",
    "submitter": {
        "id": 1781,
        "url": "http://patchwork.dpdk.org/api/people/1781/?format=api",
        "name": "Patrick Fu",
        "email": "patrick.fu@intel.com"
    },
    "delegate": {
        "id": 319,
        "url": "http://patchwork.dpdk.org/api/users/319/?format=api",
        "username": "fyigit",
        "first_name": "Ferruh",
        "last_name": "Yigit",
        "email": "ferruh.yigit@amd.com"
    },
    "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20200721054720.3417804-1-patrick.fu@intel.com/mbox/",
    "series": [
        {
            "id": 11186,
            "url": "http://patchwork.dpdk.org/api/series/11186/?format=api",
            "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=11186",
            "date": "2020-07-21T05:47:20",
            "name": "[v3] vhost: fix wrong async completion of multi-seg packets",
            "version": 3,
            "mbox": "http://patchwork.dpdk.org/series/11186/mbox/"
        }
    ],
    "comments": "http://patchwork.dpdk.org/api/patches/74515/comments/",
    "check": "success",
    "checks": "http://patchwork.dpdk.org/api/patches/74515/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 4E7DBA0526;\n\tTue, 21 Jul 2020 07:49:55 +0200 (CEST)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id CABFA1BFFE;\n\tTue, 21 Jul 2020 07:49:53 +0200 (CEST)",
            "from mga03.intel.com (mga03.intel.com [134.134.136.65])\n by dpdk.org (Postfix) with ESMTP id 0052D1BFEB\n for <dev@dpdk.org>; Tue, 21 Jul 2020 07:49:51 +0200 (CEST)",
            "from orsmga007.jf.intel.com ([10.7.209.58])\n by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 20 Jul 2020 22:49:50 -0700",
            "from npg-dpdk-patrickfu-casc2.sh.intel.com ([10.67.119.92])\n by orsmga007.jf.intel.com with ESMTP; 20 Jul 2020 22:49:49 -0700"
        ],
        "IronPort-SDR": [
            "\n v79IqzDQRkaixy13JWEZ4OoI2ftNJ4zB9UWEdOwwf/VmVGDeRpY4XuSVWUFl0EfwiNrelk6fmV\n FQE7NJv4iNQg==",
            "\n NGjnD5afHMeGaIQqHPfS0nldIJUCyzgBI2ZZ9awBa/EL2tvIUtdRZvlKXsvQDrxGMSbDUgBi8E\n EBIgmAS3qEGw=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6000,8403,9688\"; a=\"150053814\"",
            "E=Sophos;i=\"5.75,377,1589266800\"; d=\"scan'208\";a=\"150053814\"",
            "E=Sophos;i=\"5.75,377,1589266800\"; d=\"scan'208\";a=\"327762198\""
        ],
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "From": "patrick.fu@intel.com",
        "To": "dev@dpdk.org,\n\tmaxime.coquelin@redhat.com,\n\tchenbo.xia@intel.com",
        "Cc": "Patrick Fu <patrick.fu@intel.com>",
        "Date": "Tue, 21 Jul 2020 13:47:20 +0800",
        "Message-Id": "<20200721054720.3417804-1-patrick.fu@intel.com>",
        "X-Mailer": "git-send-email 2.18.4",
        "In-Reply-To": "<20200715074650.2375332-1-patrick.fu@intel.com>",
        "References": "<20200715074650.2375332-1-patrick.fu@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v3] vhost: fix wrong async completion of\n\tmulti-seg packets",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Patrick Fu <patrick.fu@intel.com>\n\nIn async enqueue copy, a packet could be split into multiple copy\nsegments. When polling the copy completion status, current async data\npath assumes the async device callbacks are aware of the packet\nboundary and return completed segments only if all segments belonging\nto the same packet are done. Such assumption are not generic to common\nasync devices and may degrees the copy performance if async callbacks\nhave to implement it in software manner.\n\nThis patch adds tracking of the completed copy segments at vhost side.\nIf async copy device reports partial completion of a packets, only\nvhost internal record is updated and vring status keeps unchanged\nuntil remaining segments of the packet are also finished. The async\ncopy device is no longer necessary to care about the packet boundary.\n\nFixes: cd6760da1076 (\"vhost: introduce async enqueue for split ring\")\n\nSigned-off-by: Patrick Fu <patrick.fu@intel.com>\n---\nv2:\n - fix an issue that can stuck async poll when packets buffer is full\nv3:\n - revise commit message and title\n - rename a local variable to better reflect its usage\n\n lib/librte_vhost/vhost.h      |  3 +++\n lib/librte_vhost/virtio_net.c | 27 +++++++++++++++++----------\n 2 files changed, 20 insertions(+), 10 deletions(-)",
    "diff": "diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h\nindex 8c01cee42..0f7212f88 100644\n--- a/lib/librte_vhost/vhost.h\n+++ b/lib/librte_vhost/vhost.h\n@@ -46,6 +46,8 @@\n \n #define MAX_PKT_BURST 32\n \n+#define ASYNC_MAX_POLL_SEG 255\n+\n #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)\n #define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)\n \n@@ -225,6 +227,7 @@ struct vhost_virtqueue {\n \tuint64_t\t*async_pending_info;\n \tuint16_t\tasync_pkts_idx;\n \tuint16_t\tasync_pkts_inflight_n;\n+\tuint16_t\tasync_last_seg_n;\n \n \t/* vq async features */\n \tbool\t\tasync_inorder;\ndiff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c\nindex 1d0be3dd4..635113cb0 100644\n--- a/lib/librte_vhost/virtio_net.c\n+++ b/lib/librte_vhost/virtio_net.c\n@@ -1631,8 +1631,9 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,\n {\n \tstruct virtio_net *dev = get_device(vid);\n \tstruct vhost_virtqueue *vq;\n-\tuint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;\n+\tuint16_t n_segs_cpl, n_pkts_put = 0, n_descs = 0;\n \tuint16_t start_idx, pkts_idx, vq_size;\n+\tuint16_t n_inflight;\n \tuint64_t *async_pending_info;\n \n \tVHOST_LOG_DATA(DEBUG, \"(%d) %s\\n\", dev->vid, __func__);\n@@ -1646,46 +1647,52 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,\n \n \trte_spinlock_lock(&vq->access_lock);\n \n+\tn_inflight = vq->async_pkts_inflight_n;\n \tpkts_idx = vq->async_pkts_idx;\n \tasync_pending_info = vq->async_pending_info;\n \tvq_size = vq->size;\n \tstart_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,\n \t\tvq_size, vq->async_pkts_inflight_n);\n \n-\tn_pkts_cpl =\n-\t\tvq->async_ops.check_completed_copies(vid, queue_id, 0, count);\n+\tn_segs_cpl = vq->async_ops.check_completed_copies(vid, queue_id,\n+\t\t0, ASYNC_MAX_POLL_SEG - vq->async_last_seg_n) +\n+\t\tvq->async_last_seg_n;\n \n \trte_smp_wmb();\n \n-\twhile (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {\n+\twhile (likely((n_pkts_put < count) && n_inflight)) {\n \t\tuint64_t info = async_pending_info[\n \t\t\t(start_idx + n_pkts_put) & (vq_size - 1)];\n \t\tuint64_t n_segs;\n \t\tn_pkts_put++;\n+\t\tn_inflight--;\n \t\tn_descs += info & ASYNC_PENDING_INFO_N_MSK;\n \t\tn_segs = info >> ASYNC_PENDING_INFO_N_SFT;\n \n \t\tif (n_segs) {\n-\t\t\tif (!n_pkts_cpl || n_pkts_cpl < n_segs) {\n+\t\t\tif (unlikely(n_segs_cpl < n_segs)) {\n \t\t\t\tn_pkts_put--;\n+\t\t\t\tn_inflight++;\n \t\t\t\tn_descs -= info & ASYNC_PENDING_INFO_N_MSK;\n-\t\t\t\tif (n_pkts_cpl) {\n+\t\t\t\tif (n_segs_cpl) {\n \t\t\t\t\tasync_pending_info[\n \t\t\t\t\t\t(start_idx + n_pkts_put) &\n \t\t\t\t\t\t(vq_size - 1)] =\n-\t\t\t\t\t((n_segs - n_pkts_cpl) <<\n+\t\t\t\t\t((n_segs - n_segs_cpl) <<\n \t\t\t\t\t ASYNC_PENDING_INFO_N_SFT) |\n \t\t\t\t\t(info & ASYNC_PENDING_INFO_N_MSK);\n-\t\t\t\t\tn_pkts_cpl = 0;\n+\t\t\t\t\tn_segs_cpl = 0;\n \t\t\t\t}\n \t\t\t\tbreak;\n \t\t\t}\n-\t\t\tn_pkts_cpl -= n_segs;\n+\t\t\tn_segs_cpl -= n_segs;\n \t\t}\n \t}\n \n+\tvq->async_last_seg_n = n_segs_cpl;\n+\n \tif (n_pkts_put) {\n-\t\tvq->async_pkts_inflight_n -= n_pkts_put;\n+\t\tvq->async_pkts_inflight_n = n_inflight;\n \t\t__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);\n \n \t\tvhost_vring_call_split(dev, vq);\n",
    "prefixes": [
        "v3"
    ]
}
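
Since the Allow header lists PUT and PATCH, the same resource can also be updated. The sketch below is a hypothetical partial update with Python's requests library; it assumes an API token belonging to a user with maintainer rights on the project, and that the token is sent using Django REST framework's "Authorization: Token ..." header scheme. The token value is a placeholder.

    # Minimal sketch: mark the patch as accepted and archive it via PATCH.
    # Assumes maintainer rights and token authentication (hypothetical token below).
    import requests

    API_TOKEN = "your-api-token"  # placeholder; not a real credential

    resp = requests.patch(
        "http://patchwork.dpdk.org/api/patches/74515/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        json={"state": "accepted", "archived": True},
    )
    resp.raise_for_status()
    print(resp.json()["state"])

Only the fields named in the request body ("state" and "archived" here) are modified; a PUT request would instead replace the writable fields of the patch as a whole.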