From patchwork Tue Mar 19 06:43:03 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51318
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: stable@dpdk.org
Date: Tue, 19 Mar 2019 14:43:03 +0800
Message-Id: <20190319064312.13743-2-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init

The pointer to the event structure should be cast to uintptr_t first.
Without the cast, the offset is added to the struct pointer, which
advances it in units of sizeof(struct vring_packed_desc_event) rather
than in bytes before the alignment is applied.

Fixes: f803734b0f2e ("net/virtio: vring init for packed queues")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie
Reviewed-by: Jens Freimann
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ring.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h index 1760823c6..5a37629fe 100644 --- a/drivers/net/virtio/virtio_ring.h +++ b/drivers/net/virtio/virtio_ring.h @@ -165,7 +165,7 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align, vr->driver_event = (struct vring_packed_desc_event *)(p + vr->num * sizeof(struct vring_packed_desc)); vr->device_event = (struct vring_packed_desc_event *) - RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event + + RTE_ALIGN_CEIL(((uintptr_t)vr->driver_event + sizeof(struct vring_packed_desc_event)), align); }

From patchwork Tue Mar 19 06:43:04 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51319
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: stable@dpdk.org
Date: Tue, 19 Mar 2019 14:43:04 +0800
Message-Id: <20190319064312.13743-3-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring

When disabling interrupts, the shadow event flags should also be
updated accordingly; otherwise a later enable can be skipped because
the shadow value no longer matches the flags actually written to the
ring. The unnecessary wmb() is also dropped.

Fixes: e9f4feb7e622 ("net/virtio: add packed virtqueue helpers")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtqueue.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index ca9d8e6e3..24fa873c3 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -321,12 +321,13 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n) static inline void virtqueue_disable_intr_packed(struct virtqueue *vq) { - uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags; - - *event_flags = RING_EVENT_FLAGS_DISABLE; + if (vq->event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { + vq->event_flags_shadow = RING_EVENT_FLAGS_DISABLE; + vq->ring_packed.driver_event->desc_event_flags = + vq->event_flags_shadow; + } } - /** * Tell the backend not to interrupt us.
*/ @@ -348,7 +349,6 @@ virtqueue_enable_intr_packed(struct virtqueue *vq) uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags; if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) { - virtio_wmb(vq->hw->weak_barriers); vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE; *event_flags = vq->event_flags_shadow; }

From patchwork Tue Mar 19 06:43:05 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51320
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: stable@dpdk.org
Date: Tue, 19 Mar 2019 14:43:05 +0800
Message-Id: <20190319064312.13743-4-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable

Typically, after enabling the Rx interrupt, a check should be done to
make sure that there are no new incoming packets before going to
sleep. A barrier is therefore needed to make sure that such a check
cannot be reordered before the interrupt is actually enabled.
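To illustrate the pattern described above, here is a minimal sketch of
an application-side receive loop (hypothetical code, not part of this
patch; wait_for_rx_interrupt() is an assumed application helper, e.g.
an epoll wait on the Rx interrupt fd). The re-check after enabling the
interrupt is exactly the load that the new barrier keeps in order:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* App-defined: block until the Rx interrupt for this queue fires. */
static void wait_for_rx_interrupt(uint16_t port_id, uint16_t queue_id);

static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
		if (nb_rx == 0) {
			rte_eth_dev_rx_intr_enable(port_id, queue_id);
			/* Re-check; without the virtio_mb() added by this
			 * patch, this load could be reordered before the
			 * enable and the thread could sleep on a non-empty
			 * ring. */
			nb_rx = rte_eth_rx_burst(port_id, queue_id,
						 pkts, BURST_SIZE);
			if (nb_rx == 0)
				wait_for_rx_interrupt(port_id, queue_id);
			rte_eth_dev_rx_intr_disable(port_id, queue_id);
		}
		for (i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]); /* or hand off for processing */
	}
}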
Fixes: c056be239db5 ("net/virtio: add Rx interrupt enable/disable functions")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 78ba7bd29..ff16fb63e 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -850,10 +850,12 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) static int virtio_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { + struct virtio_hw *hw = dev->data->dev_private; struct virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; struct virtqueue *vq = rxvq->vq; virtqueue_enable_intr(vq); + virtio_mb(hw->weak_barriers); return 0; }

From patchwork Tue Mar 19 06:43:06 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51321
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Mar 2019 14:43:06 +0800
Message-Id: <20190319064312.13743-5-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring

Cache the AVAIL, USED and WRITE bits to avoid recalculating them
wherever possible. Since the AVAIL and USED bits flip together at
every ring wrap, XORing the cached value with
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1) on wrap keeps it current
without recomputing it from the wrap counter. Note that the WRITE bit
isn't cached for the control queue.

Signed-off-by: Tiwei Bie
Reviewed-by: Jens Freimann
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
 drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
 drivers/net/virtio/virtqueue.h | 8 +++----
 3 files changed, 32 insertions(+), 42 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index ff16fb63e..9060b6b33 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -149,7 +149,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, int head; struct vring_packed_desc *desc = vq->ring_packed.desc_packed; struct virtio_pmd_ctrl *result; - bool avail_wrap_counter; + uint16_t flags; int sum = 0; int nb_descs = 0; int k; @@ -161,14 +161,15 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, * One RX packet for ACK.
*/ head = vq->vq_avail_idx; - avail_wrap_counter = vq->avail_wrap_counter; + flags = vq->cached_flags; desc[head].addr = cvq->virtio_net_hdr_mem; desc[head].len = sizeof(struct virtio_net_ctrl_hdr); vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } for (k = 0; k < pkt_num; k++) { @@ -177,34 +178,31 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, + sizeof(ctrl->status) + sizeof(uint8_t) * sum; desc[vq->vq_avail_idx].len = dlen[k]; desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT | - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); + vq->cached_flags; sum += dlen[k]; vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem + sizeof(struct virtio_net_ctrl_hdr); desc[vq->vq_avail_idx].len = sizeof(ctrl->status); - desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); + desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags; vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } virtio_wmb(vq->hw->weak_barriers); - desc[head].flags = VRING_DESC_F_NEXT | - VRING_DESC_F_AVAIL(avail_wrap_counter) | - VRING_DESC_F_USED(!avail_wrap_counter); + desc[head].flags = VRING_DESC_F_NEXT | flags; virtio_wmb(vq->hw->weak_barriers); virtqueue_notify(vq); @@ -226,12 +224,12 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n" "vq->vq_avail_idx=%d\n" "vq->vq_used_cons_idx=%d\n" - "vq->avail_wrap_counter=%d\n" + "vq->cached_flags=0x%x\n" "vq->used_wrap_counter=%d\n", vq->vq_free_cnt, vq->vq_avail_idx, vq->vq_used_cons_idx, - vq->avail_wrap_counter, + vq->cached_flags, vq->used_wrap_counter); result = cvq->virtio_net_hdr_mz->addr; @@ -491,11 +489,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx) vq->vq_nentries = vq_size; vq->event_flags_shadow = 0; if (vtpci_packed_queue(hw)) { - vq->avail_wrap_counter = 1; vq->used_wrap_counter = 1; - vq->avail_used_flags = - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); + vq->cached_flags = VRING_DESC_F_AVAIL(1); + if (queue_type == VTNET_RQ) + vq->cached_flags |= VRING_DESC_F_WRITE; } /* diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 771d3c3f6..3c354baef 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -431,7 +431,7 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, struct rte_mbuf **cookie, uint16_t num) { struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed; - uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags; + uint16_t flags = vq->cached_flags; struct virtio_hw *hw = vq->hw; struct vq_desc_extra *dxp; uint16_t idx; @@ -460,11 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, start_dp[idx].flags = flags; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; - vq->avail_used_flags = - 
VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); - flags = VRING_DESC_F_WRITE | vq->avail_used_flags; + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); + flags = vq->cached_flags; } } vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); @@ -643,7 +641,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, dxp->ndescs = 1; dxp->cookie = cookie; - flags = vq->avail_used_flags; + flags = vq->cached_flags; /* prepend cannot fail, checked by caller */ hdr = (struct virtio_net_hdr *) @@ -662,8 +660,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; - vq->avail_used_flags ^= + vq->cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } @@ -705,7 +702,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, head_dp = &vq->ring_packed.desc_packed[idx]; head_flags = cookie->next ? VRING_DESC_F_NEXT : 0; - head_flags |= vq->avail_used_flags; + head_flags |= vq->cached_flags; if (can_push) { /* prepend cannot fail, checked by caller */ @@ -730,10 +727,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, idx++; if (idx >= vq->vq_nentries) { idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; - vq->avail_used_flags = - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } @@ -746,17 +741,15 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, start_dp[idx].len = cookie->data_len; if (likely(idx != head_idx)) { flags = cookie->next ? VRING_DESC_F_NEXT : 0; - flags |= vq->avail_used_flags; + flags |= vq->cached_flags; start_dp[idx].flags = flags; } prev = idx; idx++; if (idx >= vq->vq_nentries) { idx -= vq->vq_nentries; - vq->avail_wrap_counter ^= 1; - vq->avail_used_flags = - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) | - VRING_DESC_F_USED(!vq->avail_wrap_counter); + vq->cached_flags ^= + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } while ((cookie = cookie->next) != NULL); diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 24fa873c3..80c0c43c3 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -193,10 +193,10 @@ struct virtqueue { struct virtio_hw *hw; /**< virtio_hw structure pointer. */ struct vring vq_ring; /**< vring keeping desc, used and avail */ struct vring_packed ring_packed; /**< vring keeping descs */ - bool avail_wrap_counter; bool used_wrap_counter; + uint16_t cached_flags; /**< cached flags for descs */ uint16_t event_flags_shadow; - uint16_t avail_used_flags; + /** * Last consumed descriptor in the used table, * trails vq_ring.used->idx. 
@@ -478,9 +478,9 @@ virtqueue_notify(struct virtqueue *vq) if (vtpci_packed_queue((vq)->hw)) { \ PMD_INIT_LOG(DEBUG, \ "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \ - "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \ + " cached_flags=0x%x; used_wrap_counter=%d", \ (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \ - (vq)->vq_avail_idx, (vq)->avail_wrap_counter, \ + (vq)->vq_avail_idx, (vq)->cached_flags, \ (vq)->used_wrap_counter); \ break; \ } \

From patchwork Tue Mar 19 06:43:07 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51322
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Mar 2019 14:43:07 +0800
Message-Id: <20190319064312.13743-6-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure

Put the split ring and packed ring specific fields into separate
sub-structures, and union those sub-structures, as a queue never uses
both ring formats at the same time.

Signed-off-by: Tiwei Bie
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
 drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
 drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
 drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
 drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
 drivers/net/virtio/virtqueue.c | 6 +-
 drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
 7 files changed, 117 insertions(+), 109 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 9060b6b33..bc91ad493 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -147,7 +147,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, { struct virtqueue *vq = cvq->vq; int head; - struct vring_packed_desc *desc = vq->ring_packed.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; struct virtio_pmd_ctrl *result; uint16_t flags; int sum = 0; @@ -161,14 +161,14 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, * One RX packet for ACK.
*/ head = vq->vq_avail_idx; - flags = vq->cached_flags; + flags = vq->vq_packed.cached_flags; desc[head].addr = cvq->virtio_net_hdr_mem; desc[head].len = sizeof(struct virtio_net_ctrl_hdr); vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } @@ -178,13 +178,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, + sizeof(ctrl->status) + sizeof(uint8_t) * sum; desc[vq->vq_avail_idx].len = dlen[k]; desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT | - vq->cached_flags; + vq->vq_packed.cached_flags; sum += dlen[k]; vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } @@ -192,12 +192,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem + sizeof(struct virtio_net_ctrl_hdr); desc[vq->vq_avail_idx].len = sizeof(ctrl->status); - desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags; + desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | + vq->vq_packed.cached_flags; vq->vq_free_cnt--; nb_descs++; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } @@ -218,19 +219,19 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, vq->vq_used_cons_idx += nb_descs; if (vq->vq_used_cons_idx >= vq->vq_nentries) { vq->vq_used_cons_idx -= vq->vq_nentries; - vq->used_wrap_counter ^= 1; + vq->vq_packed.used_wrap_counter ^= 1; } PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n" "vq->vq_avail_idx=%d\n" "vq->vq_used_cons_idx=%d\n" - "vq->cached_flags=0x%x\n" - "vq->used_wrap_counter=%d\n", + "vq->vq_packed.cached_flags=0x%x\n" + "vq->vq_packed.used_wrap_counter=%d\n", vq->vq_free_cnt, vq->vq_avail_idx, vq->vq_used_cons_idx, - vq->cached_flags, - vq->used_wrap_counter); + vq->vq_packed.cached_flags, + vq->vq_packed.used_wrap_counter); result = cvq->virtio_net_hdr_mz->addr; return result; @@ -280,30 +281,30 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, * At least one TX packet per argument; * One RX packet for ACK. 
*/ - vq->vq_ring.desc[head].flags = VRING_DESC_F_NEXT; - vq->vq_ring.desc[head].addr = cvq->virtio_net_hdr_mem; - vq->vq_ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr); + vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT; + vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem; + vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr); vq->vq_free_cnt--; - i = vq->vq_ring.desc[head].next; + i = vq->vq_split.ring.desc[head].next; for (k = 0; k < pkt_num; k++) { - vq->vq_ring.desc[i].flags = VRING_DESC_F_NEXT; - vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem + vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT; + vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem + sizeof(struct virtio_net_ctrl_hdr) + sizeof(ctrl->status) + sizeof(uint8_t)*sum; - vq->vq_ring.desc[i].len = dlen[k]; + vq->vq_split.ring.desc[i].len = dlen[k]; sum += dlen[k]; vq->vq_free_cnt--; - i = vq->vq_ring.desc[i].next; + i = vq->vq_split.ring.desc[i].next; } - vq->vq_ring.desc[i].flags = VRING_DESC_F_WRITE; - vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem + vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE; + vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem + sizeof(struct virtio_net_ctrl_hdr); - vq->vq_ring.desc[i].len = sizeof(ctrl->status); + vq->vq_split.ring.desc[i].len = sizeof(ctrl->status); vq->vq_free_cnt--; - vq->vq_desc_head_idx = vq->vq_ring.desc[i].next; + vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next; vq_update_avail_ring(vq, head); vq_update_avail_idx(vq); @@ -324,16 +325,17 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, used_idx = (uint32_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1)); - uep = &vq->vq_ring.used->ring[used_idx]; + uep = &vq->vq_split.ring.used->ring[used_idx]; idx = (uint32_t) uep->id; desc_idx = idx; - while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) { - desc_idx = vq->vq_ring.desc[desc_idx].next; + while (vq->vq_split.ring.desc[desc_idx].flags & + VRING_DESC_F_NEXT) { + desc_idx = vq->vq_split.ring.desc[desc_idx].next; vq->vq_free_cnt++; } - vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx; + vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx; vq->vq_desc_head_idx = idx; vq->vq_used_cons_idx++; @@ -395,7 +397,6 @@ static void virtio_init_vring(struct virtqueue *vq) { int size = vq->vq_nentries; - struct vring *vr = &vq->vq_ring; uint8_t *ring_mem = vq->vq_ring_virt_mem; PMD_INIT_FUNC_TRACE(); @@ -409,10 +410,12 @@ virtio_init_vring(struct virtqueue *vq) vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries); if (vtpci_packed_queue(vq->hw)) { - vring_init_packed(&vq->ring_packed, ring_mem, + vring_init_packed(&vq->vq_packed.ring, ring_mem, VIRTIO_PCI_VRING_ALIGN, size); vring_desc_init_packed(vq, size); } else { + struct vring *vr = &vq->vq_split.ring; + vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size); vring_desc_init_split(vr->desc, size); } @@ -487,12 +490,12 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx) vq->hw = hw; vq->vq_queue_index = vtpci_queue_idx; vq->vq_nentries = vq_size; - vq->event_flags_shadow = 0; if (vtpci_packed_queue(hw)) { - vq->used_wrap_counter = 1; - vq->cached_flags = VRING_DESC_F_AVAIL(1); + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = VRING_DESC_F_AVAIL(1); + vq->vq_packed.event_flags_shadow = 0; if (queue_type == VTNET_RQ) - vq->cached_flags |= VRING_DESC_F_WRITE; + vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE; } /* diff 
--git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 3c354baef..02f8d9451 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -62,13 +62,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx) struct vq_desc_extra *dxp; uint16_t desc_idx_last = desc_idx; - dp = &vq->vq_ring.desc[desc_idx]; + dp = &vq->vq_split.ring.desc[desc_idx]; dxp = &vq->vq_descx[desc_idx]; vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs); if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) { while (dp->flags & VRING_DESC_F_NEXT) { desc_idx_last = dp->next; - dp = &vq->vq_ring.desc[dp->next]; + dp = &vq->vq_split.ring.desc[dp->next]; } } dxp->ndescs = 0; @@ -81,7 +81,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx) if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) { vq->vq_desc_head_idx = desc_idx; } else { - dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx]; + dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx]; dp_tail->next = desc_idx; } @@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq, struct vring_packed_desc *desc; uint16_t i; - desc = vq->ring_packed.desc_packed; + desc = vq->vq_packed.ring.desc_packed; for (i = 0; i < num; i++) { used_idx = vq->vq_used_cons_idx; @@ -141,7 +141,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq, vq->vq_used_cons_idx++; if (vq->vq_used_cons_idx >= vq->vq_nentries) { vq->vq_used_cons_idx -= vq->vq_nentries; - vq->used_wrap_counter ^= 1; + vq->vq_packed.used_wrap_counter ^= 1; } } @@ -160,7 +160,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts, /* Caller does the check */ for (i = 0; i < num ; i++) { used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1)); - uep = &vq->vq_ring.used->ring[used_idx]; + uep = &vq->vq_split.ring.used->ring[used_idx]; desc_idx = (uint16_t) uep->id; len[i] = uep->len; cookie = (struct rte_mbuf *)vq->vq_descx[desc_idx].cookie; @@ -199,7 +199,7 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq, for (i = 0; i < num; i++) { used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1); /* Desc idx same as used idx */ - uep = &vq->vq_ring.used->ring[used_idx]; + uep = &vq->vq_split.ring.used->ring[used_idx]; len[i] = uep->len; cookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie; @@ -229,7 +229,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num) { uint16_t used_idx, id, curr_id, free_cnt = 0; uint16_t size = vq->vq_nentries; - struct vring_packed_desc *desc = vq->ring_packed.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; struct vq_desc_extra *dxp; used_idx = vq->vq_used_cons_idx; @@ -244,7 +244,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num) num -= dxp->ndescs; if (used_idx >= size) { used_idx -= size; - vq->used_wrap_counter ^= 1; + vq->vq_packed.used_wrap_counter ^= 1; } if (dxp->cookie != NULL) { rte_pktmbuf_free(dxp->cookie); @@ -261,7 +261,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num) { uint16_t used_idx, id; uint16_t size = vq->vq_nentries; - struct vring_packed_desc *desc = vq->ring_packed.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; struct vq_desc_extra *dxp; used_idx = vq->vq_used_cons_idx; @@ -272,7 +272,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num) vq->vq_used_cons_idx += dxp->ndescs; if (vq->vq_used_cons_idx >= size) { vq->vq_used_cons_idx -= size; - vq->used_wrap_counter ^= 1; + vq->vq_packed.used_wrap_counter ^= 
1; } vq_ring_free_id_packed(vq, id); if (dxp->cookie != NULL) { @@ -302,7 +302,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num) struct vq_desc_extra *dxp; used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1)); - uep = &vq->vq_ring.used->ring[used_idx]; + uep = &vq->vq_split.ring.used->ring[used_idx]; desc_idx = (uint16_t) uep->id; dxp = &vq->vq_descx[desc_idx]; @@ -356,7 +356,7 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq, return -EMSGSIZE; head_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1); - start_dp = vq->vq_ring.desc; + start_dp = vq->vq_split.ring.desc; while (i < num) { idx = head_idx & (vq->vq_nentries - 1); @@ -389,7 +389,7 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie, { struct vq_desc_extra *dxp; struct virtio_hw *hw = vq->hw; - struct vring_desc *start_dp = vq->vq_ring.desc; + struct vring_desc *start_dp = vq->vq_split.ring.desc; uint16_t idx, i; if (unlikely(vq->vq_free_cnt == 0)) @@ -430,8 +430,8 @@ static inline int virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, struct rte_mbuf **cookie, uint16_t num) { - struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed; - uint16_t flags = vq->cached_flags; + struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc_packed; + uint16_t flags = vq->vq_packed.cached_flags; struct virtio_hw *hw = vq->hw; struct vq_desc_extra *dxp; uint16_t idx; @@ -460,9 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, start_dp[idx].flags = flags; if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); - flags = vq->cached_flags; + flags = vq->vq_packed.cached_flags; } } vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); @@ -589,7 +589,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq, uint16_t i = 0; idx = vq->vq_desc_head_idx; - start_dp = vq->vq_ring.desc; + start_dp = vq->vq_split.ring.desc; while (i < num) { idx = idx & (vq->vq_nentries - 1); @@ -635,13 +635,13 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; idx = vq->vq_avail_idx; - dp = &vq->ring_packed.desc_packed[idx]; + dp = &vq->vq_packed.ring.desc_packed[idx]; dxp = &vq->vq_descx[id]; dxp->ndescs = 1; dxp->cookie = cookie; - flags = vq->cached_flags; + flags = vq->vq_packed.cached_flags; /* prepend cannot fail, checked by caller */ hdr = (struct virtio_net_hdr *) @@ -660,7 +660,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, if (++vq->vq_avail_idx >= vq->vq_nentries) { vq->vq_avail_idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } @@ -698,11 +698,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, head_idx = vq->vq_avail_idx; idx = head_idx; prev = head_idx; - start_dp = vq->ring_packed.desc_packed; + start_dp = vq->vq_packed.ring.desc_packed; - head_dp = &vq->ring_packed.desc_packed[idx]; + head_dp = &vq->vq_packed.ring.desc_packed[idx]; head_flags = cookie->next ? 
VRING_DESC_F_NEXT : 0; - head_flags |= vq->cached_flags; + head_flags |= vq->vq_packed.cached_flags; if (can_push) { /* prepend cannot fail, checked by caller */ @@ -727,7 +727,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, idx++; if (idx >= vq->vq_nentries) { idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } @@ -741,14 +741,14 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, start_dp[idx].len = cookie->data_len; if (likely(idx != head_idx)) { flags = cookie->next ? VRING_DESC_F_NEXT : 0; - flags |= vq->cached_flags; + flags |= vq->vq_packed.cached_flags; start_dp[idx].flags = flags; } prev = idx; idx++; if (idx >= vq->vq_nentries) { idx -= vq->vq_nentries; - vq->cached_flags ^= + vq->vq_packed.cached_flags ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1); } } while ((cookie = cookie->next) != NULL); @@ -791,7 +791,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie, dxp->cookie = (void *)cookie; dxp->ndescs = needed; - start_dp = vq->vq_ring.desc; + start_dp = vq->vq_split.ring.desc; if (can_push) { /* prepend cannot fail, checked by caller */ @@ -844,7 +844,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie, } while ((cookie = cookie->next) != NULL); if (use_indirect) - idx = vq->vq_ring.desc[head_idx].next; + idx = vq->vq_split.ring.desc[head_idx].next; vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); @@ -919,8 +919,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx) if (hw->use_simple_rx) { for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) { - vq->vq_ring.avail->ring[desc_idx] = desc_idx; - vq->vq_ring.desc[desc_idx].flags = + vq->vq_split.ring.avail->ring[desc_idx] = desc_idx; + vq->vq_split.ring.desc[desc_idx].flags = VRING_DESC_F_WRITE; } @@ -1050,7 +1050,7 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev, if (!vtpci_packed_queue(hw)) { if (hw->use_inorder_tx) - vq->vq_ring.desc[vq->vq_nentries - 1].next = 0; + vq->vq_split.ring.desc[vq->vq_nentries - 1].next = 0; } VIRTQUEUE_DUMP(vq); diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h index dc97e4ccf..3d1296a23 100644 --- a/drivers/net/virtio/virtio_rxtx_simple.h +++ b/drivers/net/virtio/virtio_rxtx_simple.h @@ -27,7 +27,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq) desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1); sw_ring = &vq->sw_ring[desc_idx]; - start_dp = &vq->vq_ring.desc[desc_idx]; + start_dp = &vq->vq_split.ring.desc[desc_idx]; ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring, RTE_VIRTIO_VPMD_RX_REARM_THRESH); diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c index d6207d7bb..cdc2a4d28 100644 --- a/drivers/net/virtio/virtio_rxtx_simple_neon.c +++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c @@ -93,7 +93,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, nb_used = RTE_MIN(nb_used, nb_pkts); desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1)); - rused = &vq->vq_ring.used->ring[desc_idx]; + rused = &vq->vq_split.ring.used->ring[desc_idx]; sw_ring = &vq->sw_ring[desc_idx]; sw_ring_end = &vq->sw_ring[vq->vq_nentries]; diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c index d768d0757..af76708d6 100644 --- a/drivers/net/virtio/virtio_rxtx_simple_sse.c +++ 
b/drivers/net/virtio/virtio_rxtx_simple_sse.c @@ -95,7 +95,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, nb_used = RTE_MIN(nb_used, nb_pkts); desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1)); - rused = &vq->vq_ring.used->ring[desc_idx]; + rused = &vq->vq_split.ring.used->ring[desc_idx]; sw_ring = &vq->sw_ring[desc_idx]; sw_ring_end = &vq->sw_ring[vq->vq_nentries]; diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 5b03f7a27..79491db32 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -61,7 +61,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq) struct vq_desc_extra *dxp; uint16_t i; - struct vring_packed_desc *descs = vq->ring_packed.desc_packed; + struct vring_packed_desc *descs = vq->vq_packed.ring.desc_packed; int cnt = 0; i = vq->vq_used_cons_idx; @@ -75,7 +75,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq) vq->vq_used_cons_idx++; if (vq->vq_used_cons_idx >= vq->vq_nentries) { vq->vq_used_cons_idx -= vq->vq_nentries; - vq->used_wrap_counter ^= 1; + vq->vq_packed.used_wrap_counter ^= 1; } i = vq->vq_used_cons_idx; } @@ -96,7 +96,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq) for (i = 0; i < nb_used; i++) { used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1); - uep = &vq->vq_ring.used->ring[used_idx]; + uep = &vq->vq_split.ring.used->ring[used_idx]; if (hw->use_simple_rx) { desc_idx = used_idx; rte_pktmbuf_free(vq->sw_ring[desc_idx]); diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 80c0c43c3..48b3912e6 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -191,17 +191,22 @@ struct vq_desc_extra { struct virtqueue { struct virtio_hw *hw; /**< virtio_hw structure pointer. */ - struct vring vq_ring; /**< vring keeping desc, used and avail */ - struct vring_packed ring_packed; /**< vring keeping descs */ - bool used_wrap_counter; - uint16_t cached_flags; /**< cached flags for descs */ - uint16_t event_flags_shadow; + union { + struct { + /**< vring keeping desc, used and avail */ + struct vring ring; + } vq_split; - /** - * Last consumed descriptor in the used table, - * trails vq_ring.used->idx. 
- */ - uint16_t vq_used_cons_idx; + struct { + /**< vring keeping descs and events */ + struct vring_packed ring; + bool used_wrap_counter; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + } vq_packed; + }; + + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ uint16_t vq_nentries; /**< vring desc numbers */ uint16_t vq_free_cnt; /**< num of desc available */ uint16_t vq_avail_idx; /**< sync until needed */ @@ -289,7 +294,7 @@ desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq) used = !!(flags & VRING_DESC_F_USED(1)); avail = !!(flags & VRING_DESC_F_AVAIL(1)); - return avail == used && used == vq->used_wrap_counter; + return avail == used && used == vq->vq_packed.used_wrap_counter; } static inline void @@ -297,10 +302,10 @@ vring_desc_init_packed(struct virtqueue *vq, int n) { int i; for (i = 0; i < n - 1; i++) { - vq->ring_packed.desc_packed[i].id = i; + vq->vq_packed.ring.desc_packed[i].id = i; vq->vq_descx[i].next = i + 1; } - vq->ring_packed.desc_packed[i].id = i; + vq->vq_packed.ring.desc_packed[i].id = i; vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END; } @@ -321,10 +326,10 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n) static inline void virtqueue_disable_intr_packed(struct virtqueue *vq) { - if (vq->event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { - vq->event_flags_shadow = RING_EVENT_FLAGS_DISABLE; - vq->ring_packed.driver_event->desc_event_flags = - vq->event_flags_shadow; + if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver_event->desc_event_flags = + vq->vq_packed.event_flags_shadow; } } @@ -337,7 +342,7 @@ virtqueue_disable_intr(struct virtqueue *vq) if (vtpci_packed_queue(vq->hw)) virtqueue_disable_intr_packed(vq); else - vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; + vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; } /** @@ -346,11 +351,10 @@ virtqueue_disable_intr(struct virtqueue *vq) static inline void virtqueue_enable_intr_packed(struct virtqueue *vq) { - uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags; - - if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) { - vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE; - *event_flags = vq->event_flags_shadow; + if (vq->vq_packed.event_flags_shadow == RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_ENABLE; + vq->vq_packed.ring.driver_event->desc_event_flags = + vq->vq_packed.event_flags_shadow; } } @@ -360,7 +364,7 @@ virtqueue_enable_intr_packed(struct virtqueue *vq) static inline void virtqueue_enable_intr_split(struct virtqueue *vq) { - vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT); + vq->vq_split.ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT); } /** @@ -404,7 +408,8 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx) return VTNET_TQ; } -#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx)) +#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_split.ring.used->idx - \ + (vq)->vq_used_cons_idx)) void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx); void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t used_idx); @@ -415,7 +420,7 @@ static inline void vq_update_avail_idx(struct virtqueue *vq) { virtio_wmb(vq->hw->weak_barriers); - vq->vq_ring.avail->idx = vq->vq_avail_idx; + vq->vq_split.ring.avail->idx = vq->vq_avail_idx; } static inline void @@ -430,8 +435,8 @@ 
vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx) * descriptor. */ avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1)); - if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx)) - vq->vq_ring.avail->ring[avail_idx] = desc_idx; + if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx)) + vq->vq_split.ring.avail->ring[avail_idx] = desc_idx; vq->vq_avail_idx++; } @@ -443,7 +448,7 @@ virtqueue_kick_prepare(struct virtqueue *vq) * the used->flags. */ virtio_mb(vq->hw->weak_barriers); - return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY); + return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY); } static inline int @@ -455,7 +460,7 @@ virtqueue_kick_prepare_packed(struct virtqueue *vq) * Ensure updated data is visible to vhost before reading the flags. */ virtio_mb(vq->hw->weak_barriers); - flags = vq->ring_packed.device_event->desc_event_flags; + flags = vq->vq_packed.ring.device_event->desc_event_flags; return flags != RING_EVENT_FLAGS_DISABLE; } @@ -473,15 +478,15 @@ virtqueue_notify(struct virtqueue *vq) #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP #define VIRTQUEUE_DUMP(vq) do { \ uint16_t used_idx, nused; \ - used_idx = (vq)->vq_ring.used->idx; \ + used_idx = (vq)->vq_split.ring.used->idx; \ nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \ if (vtpci_packed_queue((vq)->hw)) { \ PMD_INIT_LOG(DEBUG, \ "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \ " cached_flags=0x%x; used_wrap_counter=%d", \ (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \ - (vq)->vq_avail_idx, (vq)->cached_flags, \ - (vq)->used_wrap_counter); \ + (vq)->vq_avail_idx, (vq)->vq_packed.cached_flags, \ + (vq)->vq_packed.used_wrap_counter); \ break; \ } \ PMD_INIT_LOG(DEBUG, \ @@ -489,9 +494,9 @@ virtqueue_notify(struct virtqueue *vq) " avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \ " avail.flags=0x%x; used.flags=0x%x", \ (vq)->vq_nentries, (vq)->vq_free_cnt, nused, \ - (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \ - (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \ - (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \ + (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \ + (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \ + (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \ } while (0) #else #define VIRTQUEUE_DUMP(vq) do { } while (0)

From patchwork Tue Mar 19 06:43:08 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51323
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Mar 2019 14:43:08 +0800
Message-Id: <20190319064312.13743-7-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure

Drop the redundant suffixes (_packed and _event) from the fields in
the packed ring structure.

Signed-off-by: Tiwei Bie
Reviewed-by: Jens Freimann
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 2 +-
 drivers/net/virtio/virtio_ring.h | 15 ++++++-------
 drivers/net/virtio/virtio_rxtx.c | 14 ++++++------
 .../net/virtio/virtio_user/virtio_user_dev.c | 22 +++++++++----------
 drivers/net/virtio/virtio_user_ethdev.c | 11 ++++------
 drivers/net/virtio/virtqueue.c | 2 +-
 drivers/net/virtio/virtqueue.h | 10 ++++-----
 7 files changed, 36 insertions(+), 40 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index bc91ad493..f452a9a79 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -147,7 +147,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, { struct virtqueue *vq = cvq->vq; int head; - struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc; struct virtio_pmd_ctrl *result; uint16_t flags; int sum = 0; diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h index 5a37629fe..6abec4d87 100644 --- a/drivers/net/virtio/virtio_ring.h +++ b/drivers/net/virtio/virtio_ring.h @@ -78,10 +78,9 @@ struct vring_packed_desc_event { struct vring_packed { unsigned int num; - struct vring_packed_desc *desc_packed; - struct vring_packed_desc_event *driver_event; - struct vring_packed_desc_event *device_event; - + struct vring_packed_desc *desc; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; }; struct vring { @@ -161,11 +160,11 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align, unsigned int num) { vr->num = num; - vr->desc_packed = (struct vring_packed_desc *)p; - vr->driver_event = (struct vring_packed_desc_event *)(p + + vr->desc = (struct vring_packed_desc *)p; + vr->driver = (struct vring_packed_desc_event *)(p + vr->num * sizeof(struct vring_packed_desc)); - vr->device_event = (struct vring_packed_desc_event *) - RTE_ALIGN_CEIL(((uintptr_t)vr->driver_event + + vr->device = (struct vring_packed_desc_event *) + RTE_ALIGN_CEIL(((uintptr_t)vr->driver + sizeof(struct vring_packed_desc_event)), align); } diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 02f8d9451..42d0f533c 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq, struct vring_packed_desc *desc; uint16_t i; - desc = vq->vq_packed.ring.desc_packed; + desc = vq->vq_packed.ring.desc; for (i = 0; i < num; i++) { used_idx = vq->vq_used_cons_idx; @@ -229,7 +229,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num) { uint16_t used_idx, id, curr_id, free_cnt = 0; uint16_t size = vq->vq_nentries; - struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc; struct vq_desc_extra
*dxp; used_idx = vq->vq_used_cons_idx; @@ -261,7 +261,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num) { uint16_t used_idx, id; uint16_t size = vq->vq_nentries; - struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed; + struct vring_packed_desc *desc = vq->vq_packed.ring.desc; struct vq_desc_extra *dxp; used_idx = vq->vq_used_cons_idx; @@ -430,7 +430,7 @@ static inline int virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, struct rte_mbuf **cookie, uint16_t num) { - struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc_packed; + struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc; uint16_t flags = vq->vq_packed.cached_flags; struct virtio_hw *hw = vq->hw; struct vq_desc_extra *dxp; @@ -635,7 +635,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; idx = vq->vq_avail_idx; - dp = &vq->vq_packed.ring.desc_packed[idx]; + dp = &vq->vq_packed.ring.desc[idx]; dxp = &vq->vq_descx[id]; dxp->ndescs = 1; @@ -698,9 +698,9 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, head_idx = vq->vq_avail_idx; idx = head_idx; prev = head_idx; - start_dp = vq->vq_packed.ring.desc_packed; + start_dp = vq->vq_packed.ring.desc; - head_dp = &vq->vq_packed.ring.desc_packed[idx]; + head_dp = &vq->vq_packed.ring.desc[idx]; head_flags = cookie->next ? VRING_DESC_F_NEXT : 0; head_flags |= vq->vq_packed.cached_flags; diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c index d1157378d..2dc8f2051 100644 --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c @@ -52,11 +52,11 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel) if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { addr.desc_user_addr = - (uint64_t)(uintptr_t)pq_vring->desc_packed; + (uint64_t)(uintptr_t)pq_vring->desc; addr.avail_user_addr = - (uint64_t)(uintptr_t)pq_vring->driver_event; + (uint64_t)(uintptr_t)pq_vring->driver; addr.used_user_addr = - (uint64_t)(uintptr_t)pq_vring->device_event; + (uint64_t)(uintptr_t)pq_vring->device; } else { addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc; addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail; @@ -650,30 +650,30 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev, n_descs++; idx_status = idx_data; - while (vring->desc_packed[idx_status].flags & VRING_DESC_F_NEXT) { + while (vring->desc[idx_status].flags & VRING_DESC_F_NEXT) { idx_status++; if (idx_status >= dev->queue_size) idx_status -= dev->queue_size; n_descs++; } - hdr = (void *)(uintptr_t)vring->desc_packed[idx_hdr].addr; + hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr; if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) { uint16_t queues; queues = *(uint16_t *)(uintptr_t) - vring->desc_packed[idx_data].addr; + vring->desc[idx_data].addr; status = virtio_user_handle_mq(dev, queues); } /* Update status */ *(virtio_net_ctrl_ack *)(uintptr_t) - vring->desc_packed[idx_status].addr = status; + vring->desc[idx_status].addr = status; /* Update used descriptor */ - vring->desc_packed[idx_hdr].id = vring->desc_packed[idx_status].id; - vring->desc_packed[idx_hdr].len = sizeof(status); + vring->desc[idx_hdr].id = vring->desc[idx_status].id; + vring->desc[idx_hdr].len = sizeof(status); return n_descs; } @@ -685,14 +685,14 @@ virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx) struct 
vring_packed *vring = &dev->packed_vrings[queue_idx]; uint16_t n_descs; - while (desc_is_avail(&vring->desc_packed[vq->used_idx], + while (desc_is_avail(&vring->desc[vq->used_idx], vq->used_wrap_counter)) { n_descs = virtio_user_handle_ctrl_msg_packed(dev, vring, vq->used_idx); rte_smp_wmb(); - vring->desc_packed[vq->used_idx].flags = + vring->desc[vq->used_idx].flags = VRING_DESC_F_WRITE | VRING_DESC_F_AVAIL(vq->used_wrap_counter) | VRING_DESC_F_USED(vq->used_wrap_counter); diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c index 6423e1f61..c5a76bd91 100644 --- a/drivers/net/virtio/virtio_user_ethdev.c +++ b/drivers/net/virtio/virtio_user_ethdev.c @@ -290,17 +290,14 @@ virtio_user_setup_queue_packed(struct virtqueue *vq, sizeof(struct vring_packed_desc_event), VIRTIO_PCI_VRING_ALIGN); vring->num = vq->vq_nentries; - vring->desc_packed = - (void *)(uintptr_t)desc_addr; - vring->driver_event = - (void *)(uintptr_t)avail_addr; - vring->device_event = - (void *)(uintptr_t)used_addr; + vring->desc = (void *)(uintptr_t)desc_addr; + vring->driver = (void *)(uintptr_t)avail_addr; + vring->device = (void *)(uintptr_t)used_addr; dev->packed_queues[queue_idx].avail_wrap_counter = true; dev->packed_queues[queue_idx].used_wrap_counter = true; for (i = 0; i < vring->num; i++) - vring->desc_packed[i].flags = 0; + vring->desc[i].flags = 0; } static void diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 79491db32..5ff1e3587 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -61,7 +61,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq) struct vq_desc_extra *dxp; uint16_t i; - struct vring_packed_desc *descs = vq->vq_packed.ring.desc_packed; + struct vring_packed_desc *descs = vq->vq_packed.ring.desc; int cnt = 0; i = vq->vq_used_cons_idx; diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 48b3912e6..78df6d390 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -302,10 +302,10 @@ vring_desc_init_packed(struct virtqueue *vq, int n) { int i; for (i = 0; i < n - 1; i++) { - vq->vq_packed.ring.desc_packed[i].id = i; + vq->vq_packed.ring.desc[i].id = i; vq->vq_descx[i].next = i + 1; } - vq->vq_packed.ring.desc_packed[i].id = i; + vq->vq_packed.ring.desc[i].id = i; vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END; } @@ -328,7 +328,7 @@ virtqueue_disable_intr_packed(struct virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE; - vq->vq_packed.ring.driver_event->desc_event_flags = + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; } } @@ -353,7 +353,7 @@ virtqueue_enable_intr_packed(struct virtqueue *vq) { if (vq->vq_packed.event_flags_shadow == RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_ENABLE; - vq->vq_packed.ring.driver_event->desc_event_flags = + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; } } @@ -460,7 +460,7 @@ virtqueue_kick_prepare_packed(struct virtqueue *vq) * Ensure updated data is visible to vhost before reading the flags. 
*/ virtio_mb(vq->hw->weak_barriers); - flags = vq->vq_packed.ring.device_event->desc_event_flags; + flags = vq->vq_packed.ring.device->desc_event_flags; return flags != RING_EVENT_FLAGS_DISABLE; }

From patchwork Tue Mar 19 06:43:09 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 51324
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Mar 2019 14:43:09 +0800
Message-Id: <20190319064312.13743-8-tiwei.bie@intel.com>
In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com>
References: <20190319064312.13743-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure

Drop the unused field tx_indir_pq from the virtio_tx_region structure.
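For reference, a sketch of the region layout after this change,
reconstructed from the diff below (the union collapses to its only
used member, the split ring indirect table):

struct virtio_tx_region {
	struct virtio_net_hdr_mrg_rxbuf tx_hdr;
	/* Indirect descriptor table, used by the split ring only. */
	struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
		__attribute__((__aligned__(16)));
};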
Signed-off-by: Tiwei Bie Reviewed-by: Jens Freimann Reviewed-by: Maxime Coquelin --- drivers/net/virtio/virtio_ethdev.c | 10 +--------- drivers/net/virtio/virtqueue.h | 8 ++------ 2 files changed, 3 insertions(+), 15 deletions(-) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index f452a9a79..8aa250997 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -603,17 +603,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx) memset(txr, 0, vq_size * sizeof(*txr)); for (i = 0; i < vq_size; i++) { struct vring_desc *start_dp = txr[i].tx_indir; - struct vring_packed_desc *start_dp_packed = - txr[i].tx_indir_pq; /* first indirect descriptor is always the tx header */ - if (vtpci_packed_queue(hw)) { - start_dp_packed->addr = txvq->virtio_net_hdr_mem - + i * sizeof(*txr) - + offsetof(struct virtio_tx_region, - tx_hdr); - start_dp_packed->len = hw->vtnet_hdr_size; - } else { + if (!vtpci_packed_queue(hw)) { vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir)); start_dp->addr = txvq->virtio_net_hdr_mem diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 78df6d390..6dab7db8e 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -277,12 +277,8 @@ struct virtio_net_hdr_mrg_rxbuf { #define VIRTIO_MAX_TX_INDIRECT 8 struct virtio_tx_region { struct virtio_net_hdr_mrg_rxbuf tx_hdr; - union { - struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT] - __attribute__((__aligned__(16))); - struct vring_packed_desc tx_indir_pq[VIRTIO_MAX_TX_INDIRECT] - __attribute__((__aligned__(16))); - }; + struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT] + __attribute__((__aligned__(16))); }; static inline int From patchwork Tue Mar 19 06:43:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tiwei Bie X-Patchwork-Id: 51325 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0855C5681; Tue, 19 Mar 2019 07:43:59 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 9EFDD4C93 for ; Tue, 19 Mar 2019 07:43:43 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Mar 2019 23:43:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,496,1544515200"; d="scan'208";a="123847072" Received: from dpdk-tbie.sh.intel.com ([10.67.104.173]) by orsmga007.jf.intel.com with ESMTP; 18 Mar 2019 23:43:42 -0700 From: Tiwei Bie To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org Date: Tue, 19 Mar 2019 14:43:10 +0800 Message-Id: <20190319064312.13743-9-tiwei.bie@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com> References: <20190319064312.13743-1-tiwei.bie@intel.com> Subject: [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a helper for disabling interrupts in split ring to make the code consistent with the corresponding code in 
packed ring. Signed-off-by: Tiwei Bie Reviewed-by: Jens Freimann Reviewed-by: Maxime Coquelin --- drivers/net/virtio/virtqueue.h | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 6dab7db8e..5cea7cb4a 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -317,7 +317,7 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n) } /** - * Tell the backend not to interrupt us. + * Tell the backend not to interrupt us. Implementation for packed virtqueues. */ static inline void virtqueue_disable_intr_packed(struct virtqueue *vq) @@ -329,6 +329,15 @@ virtqueue_disable_intr_packed(struct virtqueue *vq) } } +/** + * Tell the backend not to interrupt us. Implementation for split virtqueues. + */ +static inline void +virtqueue_disable_intr_split(struct virtqueue *vq) +{ + vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; +} + /** * Tell the backend not to interrupt us. */ @@ -338,7 +347,7 @@ virtqueue_disable_intr(struct virtqueue *vq) if (vtpci_packed_queue(vq->hw)) virtqueue_disable_intr_packed(vq); else - vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; + virtqueue_disable_intr_split(vq); } /** From patchwork Tue Mar 19 06:43:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tiwei Bie X-Patchwork-Id: 51326 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8891256A3; Tue, 19 Mar 2019 07:44:00 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 932DD4C9C for ; Tue, 19 Mar 2019 07:43:44 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Mar 2019 23:43:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,496,1544515200"; d="scan'208";a="123847075" Received: from dpdk-tbie.sh.intel.com ([10.67.104.173]) by orsmga007.jf.intel.com with ESMTP; 18 Mar 2019 23:43:43 -0700 From: Tiwei Bie To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org Date: Tue, 19 Mar 2019 14:43:11 +0800 Message-Id: <20190319064312.13743-10-tiwei.bie@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com> References: <20190319064312.13743-1-tiwei.bie@intel.com> Subject: [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq helper for split ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a helper for sending commands in split ring to make the code consistent with the corresponding code in packed ring. 
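To see how the refactored entry point is used, here is a minimal sketch of a typical caller, modeled on the driver's promiscuous-mode control path. The function name set_promisc is illustrative and not part of this patch; the sketch assumes the driver's existing virtio_pmd_ctrl and virtnet_ctl definitions and the VIRTIO_NET_CTRL_RX class/command constants from the virtio spec.

	/* Illustrative caller: build a control command and hand it to
	 * virtio_send_command(), which after this patch takes the lock,
	 * validates the queue, and dispatches to the split or packed
	 * helper shown in the diff below. */
	static int
	set_promisc(struct virtio_hw *hw, uint8_t on)
	{
		struct virtio_pmd_ctrl ctrl;
		int dlen[1];

		/* Class/command pair defined by the virtio spec. */
		ctrl.hdr.class = VIRTIO_NET_CTRL_RX;
		ctrl.hdr.cmd = VIRTIO_NET_CTRL_RX_PROMISC;
		ctrl.data[0] = on;	/* single one-byte payload */
		dlen[0] = 1;

		return virtio_send_command(hw->cvq, &ctrl, dlen, 1);
	}
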
Signed-off-by: Tiwei Bie Reviewed-by: Jens Freimann Reviewed-by: Maxime Coquelin --- drivers/net/virtio/virtio_ethdev.c | 76 +++++++++++++++++------------- 1 file changed, 43 insertions(+), 33 deletions(-) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 8aa250997..85b223451 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -237,44 +237,18 @@ virtio_send_command_packed(struct virtnet_ctl *cvq, return result; } -static int -virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, - int *dlen, int pkt_num) +static struct virtio_pmd_ctrl * +virtio_send_command_split(struct virtnet_ctl *cvq, + struct virtio_pmd_ctrl *ctrl, + int *dlen, int pkt_num) { + struct virtio_pmd_ctrl *result; + struct virtqueue *vq = cvq->vq; uint32_t head, i; int k, sum = 0; - virtio_net_ctrl_ack status = ~0; - struct virtio_pmd_ctrl *result; - struct virtqueue *vq; - ctrl->status = status; - - if (!cvq || !cvq->vq) { - PMD_INIT_LOG(ERR, "Control queue is not supported."); - return -1; - } - - rte_spinlock_lock(&cvq->lock); - vq = cvq->vq; head = vq->vq_desc_head_idx; - PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, " - "vq->hw->cvq = %p vq = %p", - vq->vq_desc_head_idx, status, vq->hw->cvq, vq); - - if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) { - rte_spinlock_unlock(&cvq->lock); - return -1; - } - - memcpy(cvq->virtio_net_hdr_mz->addr, ctrl, - sizeof(struct virtio_pmd_ctrl)); - - if (vtpci_packed_queue(vq->hw)) { - result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num); - goto out_unlock; - } - /* * Format is enforced in qemu code: * One TX packet for header; @@ -346,8 +320,44 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, vq->vq_free_cnt, vq->vq_desc_head_idx); result = cvq->virtio_net_hdr_mz->addr; + return result; +} + +static int +virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, + int *dlen, int pkt_num) +{ + virtio_net_ctrl_ack status = ~0; + struct virtio_pmd_ctrl *result; + struct virtqueue *vq; + + ctrl->status = status; + + if (!cvq || !cvq->vq) { + PMD_INIT_LOG(ERR, "Control queue is not supported."); + return -1; + } + + rte_spinlock_lock(&cvq->lock); + vq = cvq->vq; + + PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, " + "vq->hw->cvq = %p vq = %p", + vq->vq_desc_head_idx, status, vq->hw->cvq, vq); + + if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) { + rte_spinlock_unlock(&cvq->lock); + return -1; + } + + memcpy(cvq->virtio_net_hdr_mz->addr, ctrl, + sizeof(struct virtio_pmd_ctrl)); + + if (vtpci_packed_queue(vq->hw)) + result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num); + else + result = virtio_send_command_split(cvq, ctrl, dlen, pkt_num); -out_unlock: rte_spinlock_unlock(&cvq->lock); return result->status; } From patchwork Tue Mar 19 06:43:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tiwei Bie X-Patchwork-Id: 51327 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5624058C6; Tue, 19 Mar 2019 07:44:02 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 869AF4C9C for ; Tue, 19 Mar 2019 07:43:45 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: 
from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Mar 2019 23:43:45 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,496,1544515200"; d="scan'208";a="123847079" Received: from dpdk-tbie.sh.intel.com ([10.67.104.173]) by orsmga007.jf.intel.com with ESMTP; 18 Mar 2019 23:43:44 -0700 From: Tiwei Bie To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org Date: Tue, 19 Mar 2019 14:43:12 +0800 Message-Id: <20190319064312.13743-11-tiwei.bie@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190319064312.13743-1-tiwei.bie@intel.com> References: <20190319064312.13743-1-tiwei.bie@intel.com> Subject: [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch improves descriptor refill by using the same batching strategy as done in the in-order and mergeable paths. Signed-off-by: Tiwei Bie Reviewed-by: Jens Freimann Reviewed-by: Maxime Coquelin --- drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++-------------- 1 file changed, 34 insertions(+), 26 deletions(-) diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 42d0f533c..5f6796bdb 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -1211,7 +1211,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) struct virtnet_rx *rxvq = rx_queue; struct virtqueue *vq = rxvq->vq; struct virtio_hw *hw = vq->hw; - struct rte_mbuf *rxm, *new_mbuf; + struct rte_mbuf *rxm; uint16_t nb_used, num, nb_rx; uint32_t len[VIRTIO_MBUF_BURST_SZ]; struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ]; @@ -1281,20 +1281,24 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ - while (likely(!virtqueue_full(vq))) { - new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool); - if (unlikely(new_mbuf == NULL)) { - struct rte_eth_dev *dev - = &rte_eth_devices[rxvq->port_id]; - dev->data->rx_mbuf_alloc_failed++; - break; + if (likely(!virtqueue_full(vq))) { + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, + free_cnt) == 0)) { + error = virtqueue_enqueue_recv_refill(vq, new_pkts, + free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + struct rte_eth_dev *dev = + &rte_eth_devices[rxvq->port_id]; + dev->data->rx_mbuf_alloc_failed += free_cnt; } - error = virtqueue_enqueue_recv_refill(vq, &new_mbuf, 1); - if (unlikely(error)) { - rte_pktmbuf_free(new_mbuf); - break; - } - nb_enqueued++; } if (likely(nb_enqueued)) { @@ -1316,7 +1320,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, struct virtnet_rx *rxvq = rx_queue; struct virtqueue *vq = rxvq->vq; struct virtio_hw *hw = vq->hw; - struct rte_mbuf *rxm, *new_mbuf; + struct rte_mbuf *rxm; uint16_t num, nb_rx; uint32_t len[VIRTIO_MBUF_BURST_SZ]; struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ]; @@ -1380,20 +1384,24 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ - while (likely(!virtqueue_full(vq))) { - new_mbuf =
rte_mbuf_raw_alloc(rxvq->mpool); - if (unlikely(new_mbuf == NULL)) { + if (likely(!virtqueue_full(vq))) { + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, + free_cnt) == 0)) { + error = virtqueue_enqueue_recv_refill_packed(vq, + new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { struct rte_eth_dev *dev = &rte_eth_devices[rxvq->port_id]; - dev->data->rx_mbuf_alloc_failed++; - break; + dev->data->rx_mbuf_alloc_failed += free_cnt; } - error = virtqueue_enqueue_recv_refill_packed(vq, &new_mbuf, 1); - if (unlikely(error)) { - rte_pktmbuf_free(new_mbuf); - break; - } - nb_enqueued++; } if (likely(nb_enqueued)) {
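Both refill hunks above converge on the same idiom, which can be read in isolation as follows. This is a sketch assuming the surrounding driver context (vq, rxvq, dev, error, i, nb_enqueued are the locals visible in the diff); note that rte_pktmbuf_alloc_bulk() is all-or-nothing, returning 0 only when every requested mbuf was allocated.

	/* Size the batch to the number of free slots and refill in one shot. */
	uint16_t free_cnt = vq->vq_free_cnt;
	struct rte_mbuf *new_pkts[free_cnt];	/* VLA sized to the deficit */

	if (rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0) {
		/* A single call posts the whole batch of descriptors. */
		error = virtqueue_enqueue_recv_refill(vq, new_pkts, free_cnt);
		if (unlikely(error)) {
			for (i = 0; i < free_cnt; i++)
				rte_pktmbuf_free(new_pkts[i]);
		}
		nb_enqueued += free_cnt;
	} else {
		/* Nothing was allocated on failure; just account the miss. */
		dev->data->rx_mbuf_alloc_failed += free_cnt;
	}

Compared with the old one-mbuf-at-a-time loop, the bulk allocation amortizes the mempool access over the whole batch, matching the strategy already used by the in-order and mergeable receive paths.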