From patchwork Tue Feb 19 10:59:51 2019
X-Patchwork-Submitter: Tiwei Bie <tiwei.bie@intel.com>
X-Patchwork-Id: 50365
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie <tiwei.bie@intel.com>
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Feb 2019 18:59:51 +0800
Message-Id: <20190219105951.31046-6-tiwei.bie@intel.com>
In-Reply-To: <20190219105951.31046-1-tiwei.bie@intel.com>
References: <20190219105951.31046-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 5/5] net/virtio: optimize xmit enqueue for packed ring

This patch introduces an optimized enqueue function for the packed
ring, used in the case where the virtio net header can be prepended
to an unchained mbuf.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_rxtx.c | 63 +++++++++++++++++++++++++++++++-
 1 file changed, 61 insertions(+), 2 deletions(-)
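For context: the fast path added below is only entered when the caller
has already determined that the header can be prepended in place, so
rte_pktmbuf_prepend() cannot fail inside it. A paraphrase of that
eligibility check (it lives in the caller, virtio_xmit_pkts_packed(),
and is not part of this diff; the helper name here is made up for
illustration) looks roughly like:

    #include <rte_mbuf.h>

    /* Sketch: the fast path applies only to a single, direct,
     * refcnt==1 mbuf segment with enough headroom for the
     * virtio net header. */
    static inline int
    tx_can_push_hdr(const struct rte_mbuf *txm, uint16_t hdr_size)
    {
            return rte_mbuf_refcnt_read(txm) == 1 &&
                   RTE_MBUF_DIRECT(txm) &&
                   txm->nb_segs == 1 &&
                   rte_pktmbuf_headroom(txm) >= hdr_size;
    }

This is why the new function can adjust pkt_len unconditionally, with
only a comment ("prepend cannot fail, checked by caller") guarding it.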
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 60fa3aa50..771d3c3f6 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -623,6 +623,62 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 	vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
 }
 
+static inline void
+virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
+				   struct rte_mbuf *cookie,
+				   int in_order)
+{
+	struct virtqueue *vq = txvq->vq;
+	struct vring_packed_desc *dp;
+	struct vq_desc_extra *dxp;
+	uint16_t idx, id, flags;
+	uint16_t head_size = vq->hw->vtnet_hdr_size;
+	struct virtio_net_hdr *hdr;
+
+	id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
+	idx = vq->vq_avail_idx;
+	dp = &vq->ring_packed.desc_packed[idx];
+
+	dxp = &vq->vq_descx[id];
+	dxp->ndescs = 1;
+	dxp->cookie = cookie;
+
+	flags = vq->avail_used_flags;
+
+	/* prepend cannot fail, checked by caller */
+	hdr = (struct virtio_net_hdr *)
+		rte_pktmbuf_prepend(cookie, head_size);
+	cookie->pkt_len -= head_size;
+
+	/* if offload disabled, hdr is not zeroed yet, do it now */
+	if (!vq->hw->has_tx_offload)
+		virtqueue_clear_net_hdr(hdr);
+	else
+		virtqueue_xmit_offload(hdr, cookie, true);
+
+	dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
+	dp->len = cookie->data_len;
+	dp->id = id;
+
+	if (++vq->vq_avail_idx >= vq->vq_nentries) {
+		vq->vq_avail_idx -= vq->vq_nentries;
+		vq->avail_wrap_counter ^= 1;
+		vq->avail_used_flags ^=
+			VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
+	}
+
+	vq->vq_free_cnt--;
+
+	if (!in_order) {
+		vq->vq_desc_head_idx = dxp->next;
+		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+			vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
+	}
+
+	virtio_wmb(vq->hw->weak_barriers);
+	dp->flags = flags;
+}
+
 static inline void
 virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 			      uint16_t needed, int can_push, int in_order)
@@ -1979,8 +2035,11 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		/* Enqueue Packet buffers */
-		virtqueue_enqueue_xmit_packed(txvq, txm, slots, can_push,
-					      in_order);
+		if (can_push)
+			virtqueue_enqueue_xmit_packed_fast(txvq, txm, in_order);
+		else
+			virtqueue_enqueue_xmit_packed(txvq, txm, slots, 0,
+						      in_order);
 
 		virtio_update_packet_stats(&txvq->stats, txm);
 	}
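A note on the flag handling above: in the packed ring (virtio 1.1), a
descriptor is made available by setting its AVAIL bit equal to the
driver's wrap counter and its USED bit to the inverse, which is why
the code XORs both bits whenever vq_avail_idx wraps. A standalone
model of that bookkeeping (simplified types, not DPDK's struct
virtqueue; the bit positions follow the virtio 1.1 spec):

    #include <stdbool.h>
    #include <stdint.h>

    #define DESC_F_AVAIL(b) ((uint16_t)(b) << 7)   /* VIRTQ_DESC_F_AVAIL */
    #define DESC_F_USED(b)  ((uint16_t)(b) << 15)  /* VIRTQ_DESC_F_USED */

    struct ring_state {
            uint16_t avail_idx;        /* next free slot */
            uint16_t nentries;         /* ring size */
            bool     wrap_counter;     /* flips on every wrap */
            uint16_t avail_used_flags; /* flags for newly filled slots */
    };

    /* Advance by one slot; on wrap, flip the wrap counter and
     * recompute the AVAIL/USED bits (equivalent to the XOR in the
     * patch). */
    static void ring_advance(struct ring_state *r)
    {
            if (++r->avail_idx >= r->nentries) {
                    r->avail_idx -= r->nentries;
                    r->wrap_counter = !r->wrap_counter;
                    r->avail_used_flags =
                            DESC_F_AVAIL(r->wrap_counter) |
                            DESC_F_USED(!r->wrap_counter);
            }
    }

The virtio_wmb() before the final dp->flags store exists for the same
reason: the device may begin processing the descriptor as soon as it
observes the AVAIL bit match its own wrap counter, so addr/len/id must
be visible before the flags are.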