From patchwork Tue Feb 19 10:59:48 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 50362
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: stable@dpdk.org
Date: Tue, 19 Feb 2019 18:59:48 +0800
Message-Id: <20190219105951.31046-3-tiwei.bie@intel.com>
In-Reply-To: <20190219105951.31046-1-tiwei.bie@intel.com>
References: <20190219105951.31046-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 2/5] net/virtio: fix in-order Tx path for split ring

When the IN_ORDER feature is negotiated, the device may write out only a
single used ring entry for a whole batch of buffers:

"""
Some devices always use descriptors in the same order in which they have
been made available. These devices can offer the VIRTIO_F_IN_ORDER
feature. If negotiated, this knowledge allows devices to notify the use
of a batch of buffers to the driver by only writing out a single used
ring entry with the id corresponding to the head entry of the descriptor
chain describing the last buffer in the batch.

The device then skips forward in the ring according to the size of the
batch. Accordingly, it increments the used idx by the size of the batch.

The driver needs to look up the used id and calculate the batch size to
be able to advance to where the next used ring entry will be written by
the device.
"""

Currently, the in-order Tx path for split ring can't handle this. With
this patch, the driver indexes desc_extra[] by the position in the
avail/used ring instead of by the index in the descriptor table, so it
can simply rely on the used->idx written by the device to reclaim the
descriptors and Tx buffers.
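
For illustration only (not part of the patch itself), below is a minimal
sketch of how the reworked in-order cleanup operates: desc_extra[] is
indexed by ring position, so the driver walks forward from
vq_used_cons_idx and sums dxp->ndescs to learn how many descriptors the
batch frees. Structure and field names follow this driver's split-ring
virtqueue (virtqueue.h, rte_mbuf.h); the helper name is hypothetical and
error handling is omitted.

/*
 * Hypothetical sketch: reclaim Tx buffers when VIRTIO_F_IN_ORDER is
 * negotiated. The device only bumps used->idx by the batch size, so the
 * driver does not read one used ring entry per buffer; it walks forward
 * from its last consumed index instead.
 */
static void
inorder_tx_reclaim_sketch(struct virtqueue *vq)
{
	uint16_t idx = vq->vq_used_cons_idx;
	/* Buffers the device has marked used since the last reclaim. */
	uint16_t num = vq->vq_ring.used->idx - idx;
	uint16_t free_cnt = 0;

	while (num--) {
		/* desc_extra[] is indexed by ring position, not by the
		 * descriptor table index, so a free-running index works. */
		struct vq_desc_extra *dxp =
			&vq->vq_descx[idx++ & (vq->vq_nentries - 1)];

		free_cnt += dxp->ndescs;
		if (dxp->cookie != NULL) {
			rte_pktmbuf_free(dxp->cookie);
			dxp->cookie = NULL;
		}
	}

	vq->vq_free_cnt += free_cnt;
	vq->vq_used_cons_idx = idx;
}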
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_rxtx.c | 28 ++++++++++------------------
 1 file changed, 10 insertions(+), 18 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b07ceac6d..407f58bce 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -279,7 +279,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
 static void
 virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
 {
-	uint16_t i, used_idx, desc_idx = 0, last_idx;
+	uint16_t i, idx = vq->vq_used_cons_idx;
 	int16_t free_cnt = 0;
 	struct vq_desc_extra *dxp = NULL;
 
@@ -287,27 +287,16 @@ virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
 		return;
 
 	for (i = 0; i < num; i++) {
-		struct vring_used_elem *uep;
-
-		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
-		uep = &vq->vq_ring.used->ring[used_idx];
-		desc_idx = (uint16_t)uep->id;
-
-		dxp = &vq->vq_descx[desc_idx];
-		vq->vq_used_cons_idx++;
-
+		dxp = &vq->vq_descx[idx++ & (vq->vq_nentries - 1)];
+		free_cnt += dxp->ndescs;
 		if (dxp->cookie != NULL) {
 			rte_pktmbuf_free(dxp->cookie);
 			dxp->cookie = NULL;
 		}
 	}
 
-	last_idx = desc_idx + dxp->ndescs - 1;
-	free_cnt = last_idx - vq->vq_desc_tail_idx;
-	if (free_cnt <= 0)
-		free_cnt += vq->vq_nentries;
-
-	vq_ring_free_inorder(vq, last_idx, free_cnt);
+	vq->vq_free_cnt += free_cnt;
+	vq->vq_used_cons_idx = idx;
 }
 
 static inline int
@@ -556,7 +545,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 
 	while (i < num) {
 		idx = idx & (vq->vq_nentries - 1);
-		dxp = &vq->vq_descx[idx];
+		dxp = &vq->vq_descx[vq->vq_avail_idx & (vq->vq_nentries - 1)];
 		dxp->cookie = (void *)cookies[i];
 		dxp->ndescs = 1;
 
@@ -708,7 +697,10 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 
 	head_idx = vq->vq_desc_head_idx;
 	idx = head_idx;
-	dxp = &vq->vq_descx[idx];
+	if (in_order)
+		dxp = &vq->vq_descx[vq->vq_avail_idx & (vq->vq_nentries - 1)];
+	else
+		dxp = &vq->vq_descx[idx];
 	dxp->cookie = (void *)cookie;
 	dxp->ndescs = needed;
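
A note on the masking used throughout the patch (illustrative only, not
part of the change): the split ring size is a power of two, so masking a
free-running 16-bit index with (vq_nentries - 1) always yields the
correct ring slot, including across the 16-bit wrap-around. The small
standalone program below is hypothetical and only demonstrates that
arithmetic.

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	const uint16_t nentries = 256;     /* ring size, always a power of two */
	uint16_t idx = UINT16_MAX - 1;     /* free-running index about to wrap */
	int i;

	for (i = 0; i < 4; i++) {
		/* Slots stay consecutive (254, 255, 0, 1) across the wrap. */
		printf("idx=%u slot=%u\n", (unsigned)idx,
		       (unsigned)(idx & (nentries - 1)));
		idx++;
	}
	return 0;
}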