Message ID | 1445399294-18826-5-git-send-email-yuanhan.liu@linux.intel.com (mailing list archive) |
---|---|
State | Changes Requested, archived |
Headers |
Return-Path: <dev-bounces@dpdk.org> From: Yuanhan Liu <yuanhan.liu@linux.intel.com> To: dev@dpdk.org Cc: "Michael S. Tsirkin" <mst@redhat.com>, marcel@redhat.com, Changchun Ouyang <changchun.ouyang@intel.com> Date: Wed, 21 Oct 2015 11:48:10 +0800 Message-Id: <1445399294-18826-5-git-send-email-yuanhan.liu@linux.intel.com> In-Reply-To: <1445399294-18826-1-git-send-email-yuanhan.liu@linux.intel.com> References: <1445399294-18826-1-git-send-email-yuanhan.liu@linux.intel.com> Subject: [dpdk-dev] [PATCH v7 4/8] vhost: rxtx: use queue id instead of constant ring index List-Id: patches and discussions about DPDK <dev.dpdk.org> |
Commit Message
Yuanhan Liu
Oct. 21, 2015, 3:48 a.m. UTC
From: Changchun Ouyang <changchun.ouyang@intel.com>

Do not use VIRTIO_RXQ or VIRTIO_TXQ anymore; use the queue_id instead, which will be set to a proper value for a specific queue when we have multiple queue support enabled.

For now, queue_id is still set with VIRTIO_RXQ or VIRTIO_TXQ, so it should not break anything.

Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Flavio Leitner <fbl@sysclose.org>
---
v7: commit title fix
---
 lib/librte_vhost/vhost_rxtx.c | 46 ++++++++++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 14 deletions(-)
Comments
On Wed, 21 Oct 2015 11:48:10 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com> wrote: > > +static inline int __attribute__((always_inline)) > +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) > +{ > + if ((is_tx ^ (virtq_idx & 0x1)) || > + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) > + return 0; > + > + return 1; > +} minor nits: * this doesn't need to be marked as always inline, that is as they say in English "shooting a fly with a bazooka" * prefer to just return logical result rather than have conditional: * for booleans prefer the <stdbool.h> type boolean. static bool is_valid_virt_queue_idx(uint32_t virtq_idx, bool is_tx, uint32_t max_qp_idx) { return (is_tx ^ (virtq_idx & 1)) || virtq_idx >= max_qp_idx * VIRTIO_QNUM; }
On 10/21/2015 12:44 PM, Stephen Hemminger wrote: > On Wed, 21 Oct 2015 11:48:10 +0800 > Yuanhan Liu <yuanhan.liu@linux.intel.com> wrote: > >> >> +static inline int __attribute__((always_inline)) >> +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) >> +{ >> + if ((is_tx ^ (virtq_idx & 0x1)) || >> + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) >> + return 0; >> + >> + return 1; >> +} > minor nits: > * this doesn't need to be marked as always inline, > that is as they say in English "shooting a fly with a bazooka" Stephen: always_inline "forces" the compiler to inline this function, like a macro. When should it be used or is it not preferred at all? > * prefer to just return logical result rather than have conditional: > * for booleans prefer the <stdbool.h> type boolean. > > static bool > is_valid_virt_queue_idx(uint32_t virtq_idx, bool is_tx, uint32_t max_qp_idx) > { > return (is_tx ^ (virtq_idx & 1)) || > virtq_idx >= max_qp_idx * VIRTIO_QNUM; > } >
> -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xie, Huawei > Sent: Wednesday, October 21, 2015 8:16 AM > To: Stephen Hemminger; Yuanhan Liu > Cc: dev@dpdk.org; marcel@redhat.com; Michael S. Tsirkin; Changchun Ouyang > Subject: Re: [dpdk-dev] [PATCH v7 4/8] vhost: rxtx: use queue id instead of constant ring index > > On 10/21/2015 12:44 PM, Stephen Hemminger wrote: > > On Wed, 21 Oct 2015 11:48:10 +0800 > > Yuanhan Liu <yuanhan.liu@linux.intel.com> wrote: > > > >> > >> +static inline int __attribute__((always_inline)) > >> +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) > >> +{ > >> + if ((is_tx ^ (virtq_idx & 0x1)) || > >> + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) > >> + return 0; > >> + > >> + return 1; > >> +} > > minor nits: > > * this doesn't need to be marked as always inline, > > that is as they say in English "shooting a fly with a bazooka" > Stephen: > always_inline "forces" the compiler to inline this function, like a macro. > When should it be used or is it not preferred at all? I also don't understand what's wrong with using 'always_inline' here. As I understand the author wants compiler to *always inline* that function. So seems perfectly ok to use it here. As I remember just 'inline' is sort of recommendation that compiler is free to ignore. Konstantin > > > * prefer to just return logical result rather than have conditional: > > * for booleans prefer the <stdbool.h> type boolean. > > > > static bool > > is_valid_virt_queue_idx(uint32_t virtq_idx, bool is_tx, uint32_t max_qp_idx) > > { > > return (is_tx ^ (virtq_idx & 1)) || > > virtq_idx >= max_qp_idx * VIRTIO_QNUM; > > } > >
On Wed, Oct 21, 2015 at 11:48:10AM +0800, Yuanhan Liu wrote: > From: Changchun Ouyang <changchun.ouyang@intel.com> > > Do not use VIRTIO_RXQ or VIRTIO_TXQ anymore; use the queue_id > instead, which will be set to a proper value for a specific queue > when we have multiple queue support enabled. > > For now, queue_id is still set with VIRTIO_RXQ or VIRTIO_TXQ, > so it should not break anything. > > Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com> > Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com> > Acked-by: Flavio Leitner <fbl@sysclose.org> I tried to figure out how is queue_id set and I couldn't. Please note that for virtio devices, guest is supposed to control the placement of incoming packets in RX queues. > --- > > v7: commit title fix > --- > lib/librte_vhost/vhost_rxtx.c | 46 ++++++++++++++++++++++++++++++------------- > 1 file changed, 32 insertions(+), 14 deletions(-) > > diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c > index 7026bfa..14e00ef 100644 > --- a/lib/librte_vhost/vhost_rxtx.c > +++ b/lib/librte_vhost/vhost_rxtx.c > @@ -42,6 +42,16 @@ > > #define MAX_PKT_BURST 32 > > +static inline int __attribute__((always_inline)) > +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) > +{ > + if ((is_tx ^ (virtq_idx & 0x1)) || > + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) > + return 0; > + > + return 1; > +} > + > /** > * This function adds buffers to the virtio devices RX virtqueue. Buffers can > * be received from the physical port or from another virtio device. 
A packet > @@ -68,12 +78,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, > uint8_t success = 0; > > LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_rx()\n", dev->device_fh); > - if (unlikely(queue_id != VIRTIO_RXQ)) { > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) { > + RTE_LOG(ERR, VHOST_DATA, > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > + __func__, dev->device_fh, queue_id); > return 0; > } > > - vq = dev->virtqueue[VIRTIO_RXQ]; > + vq = dev->virtqueue[queue_id]; > count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count; > > /* > @@ -235,8 +247,9 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, > } > > static inline uint32_t __attribute__((always_inline)) > -copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx, > - uint16_t res_end_idx, struct rte_mbuf *pkt) > +copy_from_mbuf_to_vring(struct virtio_net *dev, uint32_t queue_id, > + uint16_t res_base_idx, uint16_t res_end_idx, > + struct rte_mbuf *pkt) > { > uint32_t vec_idx = 0; > uint32_t entry_success = 0; > @@ -264,7 +277,7 @@ copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx, > * Convert from gpa to vva > * (guest physical addr -> vhost virtual addr) > */ > - vq = dev->virtqueue[VIRTIO_RXQ]; > + vq = dev->virtqueue[queue_id]; > vb_addr = gpa_to_vva(dev, vq->buf_vec[vec_idx].buf_addr); > vb_hdr_addr = vb_addr; > > @@ -464,11 +477,14 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id, > > LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_merge_rx()\n", > dev->device_fh); > - if (unlikely(queue_id != VIRTIO_RXQ)) { > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) { > + RTE_LOG(ERR, VHOST_DATA, > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > + __func__, dev->device_fh, queue_id); > + return 0; > } > > - vq = dev->virtqueue[VIRTIO_RXQ]; > + vq = 
dev->virtqueue[queue_id]; > count = RTE_MIN((uint32_t)MAX_PKT_BURST, count); > > if (count == 0) > @@ -509,8 +525,8 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id, > res_cur_idx); > } while (success == 0); > > - entry_success = copy_from_mbuf_to_vring(dev, res_base_idx, > - res_cur_idx, pkts[pkt_idx]); > + entry_success = copy_from_mbuf_to_vring(dev, queue_id, > + res_base_idx, res_cur_idx, pkts[pkt_idx]); > > rte_compiler_barrier(); > > @@ -562,12 +578,14 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, > uint16_t free_entries, entry_success = 0; > uint16_t avail_idx; > > - if (unlikely(queue_id != VIRTIO_TXQ)) { > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->virt_qp_nb))) { > + RTE_LOG(ERR, VHOST_DATA, > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > + __func__, dev->device_fh, queue_id); > return 0; > } > > - vq = dev->virtqueue[VIRTIO_TXQ]; > + vq = dev->virtqueue[queue_id]; > avail_idx = *((volatile uint16_t *)&vq->avail->idx); > > /* If there are no available buffers then return. */ > -- > 1.9.0
On Wed, Oct 21, 2015 at 01:31:55PM +0300, Michael S. Tsirkin wrote: > On Wed, Oct 21, 2015 at 11:48:10AM +0800, Yuanhan Liu wrote: > > From: Changchun Ouyang <changchun.ouyang@intel.com> > > > > Do not use VIRTIO_RXQ or VIRTIO_TXQ anymore; use the queue_id > > instead, which will be set to a proper value for a specific queue > > when we have multiple queue support enabled. > > > > For now, queue_id is still set with VIRTIO_RXQ or VIRTIO_TXQ, > > so it should not break anything. > > > > Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com> > > Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com> > > Acked-by: Flavio Leitner <fbl@sysclose.org> > > I tried to figure out how is queue_id set and I couldn't. queue_id is set outside the DPDK library, it's up to application to select a queue. There was a demo (examples/vhost/vhost-switch) before, and it was removed. (check the cover letter for the reason). > Please note that for virtio devices, guest is supposed to > control the placement of incoming packets in RX queues. I may not follow you. Enqueuing packets to a RX queue is done at vhost lib, outside the guest, how could the guest take the control here? --yliu > > --- > > > > v7: commit title fix > > --- > > lib/librte_vhost/vhost_rxtx.c | 46 ++++++++++++++++++++++++++++++------------- > > 1 file changed, 32 insertions(+), 14 deletions(-) > > > > diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c > > index 7026bfa..14e00ef 100644 > > --- a/lib/librte_vhost/vhost_rxtx.c > > +++ b/lib/librte_vhost/vhost_rxtx.c > > @@ -42,6 +42,16 @@ > > > > #define MAX_PKT_BURST 32 > > > > +static inline int __attribute__((always_inline)) > > +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) > > +{ > > + if ((is_tx ^ (virtq_idx & 0x1)) || > > + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) > > + return 0; > > + > > + return 1; > > +} > > + > > /** > > * This function adds buffers to the virtio devices RX virtqueue. 
Buffers can > > * be received from the physical port or from another virtio device. A packet > > @@ -68,12 +78,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, > > uint8_t success = 0; > > > > LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_rx()\n", dev->device_fh); > > - if (unlikely(queue_id != VIRTIO_RXQ)) { > > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) { > > + RTE_LOG(ERR, VHOST_DATA, > > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > > + __func__, dev->device_fh, queue_id); > > return 0; > > } > > > > - vq = dev->virtqueue[VIRTIO_RXQ]; > > + vq = dev->virtqueue[queue_id]; > > count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count; > > > > /* > > @@ -235,8 +247,9 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, > > } > > > > static inline uint32_t __attribute__((always_inline)) > > -copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx, > > - uint16_t res_end_idx, struct rte_mbuf *pkt) > > +copy_from_mbuf_to_vring(struct virtio_net *dev, uint32_t queue_id, > > + uint16_t res_base_idx, uint16_t res_end_idx, > > + struct rte_mbuf *pkt) > > { > > uint32_t vec_idx = 0; > > uint32_t entry_success = 0; > > @@ -264,7 +277,7 @@ copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx, > > * Convert from gpa to vva > > * (guest physical addr -> vhost virtual addr) > > */ > > - vq = dev->virtqueue[VIRTIO_RXQ]; > > + vq = dev->virtqueue[queue_id]; > > vb_addr = gpa_to_vva(dev, vq->buf_vec[vec_idx].buf_addr); > > vb_hdr_addr = vb_addr; > > > > @@ -464,11 +477,14 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id, > > > > LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_merge_rx()\n", > > dev->device_fh); > > - if (unlikely(queue_id != VIRTIO_RXQ)) { > > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) { > > + 
RTE_LOG(ERR, VHOST_DATA, > > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > > + __func__, dev->device_fh, queue_id); > > + return 0; > > } > > > > - vq = dev->virtqueue[VIRTIO_RXQ]; > > + vq = dev->virtqueue[queue_id]; > > count = RTE_MIN((uint32_t)MAX_PKT_BURST, count); > > > > if (count == 0) > > @@ -509,8 +525,8 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id, > > res_cur_idx); > > } while (success == 0); > > > > - entry_success = copy_from_mbuf_to_vring(dev, res_base_idx, > > - res_cur_idx, pkts[pkt_idx]); > > + entry_success = copy_from_mbuf_to_vring(dev, queue_id, > > + res_base_idx, res_cur_idx, pkts[pkt_idx]); > > > > rte_compiler_barrier(); > > > > @@ -562,12 +578,14 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, > > uint16_t free_entries, entry_success = 0; > > uint16_t avail_idx; > > > > - if (unlikely(queue_id != VIRTIO_TXQ)) { > > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->virt_qp_nb))) { > > + RTE_LOG(ERR, VHOST_DATA, > > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > > + __func__, dev->device_fh, queue_id); > > return 0; > > } > > > > - vq = dev->virtqueue[VIRTIO_TXQ]; > > + vq = dev->virtqueue[queue_id]; > > avail_idx = *((volatile uint16_t *)&vq->avail->idx); > > > > /* If there are no available buffers then return. */ > > -- > > 1.9.0
On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > Please note that for virtio devices, guest is supposed to > > control the placement of incoming packets in RX queues. > > I may not follow you. > > Enqueuing packets to a RX queue is done at vhost lib, outside the > guest, how could the guest take the control here? > > --yliu vhost should do what guest told it to. See virtio spec: 5.1.6.5.5 Automatic receive steering in multiqueue mode
On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > Please note that for virtio devices, guest is supposed to > > > control the placement of incoming packets in RX queues. > > > > I may not follow you. > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > guest, how could the guest take the control here? > > > > --yliu > > vhost should do what guest told it to. > > See virtio spec: > 5.1.6.5.5 Automatic receive steering in multiqueue mode Thanks for the info. I'll have a look tomorrow. --yliu
On Wed, 21 Oct 2015 09:38:37 +0000 "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote: > > > minor nits: > > > * this doesn't need to be marked as always inline, > > > that is as they say in English "shooting a fly with a bazooka" > > Stephen: > > always_inline "forces" the compiler to inline this function, like a macro. > > When should it be used or is it not preferred at all? > > I also don't understand what's wrong with using 'always_inline' here. > As I understand the author wants compiler to *always inline* that function. > So seems perfectly ok to use it here. > As I remember just 'inline' is sort of recommendation that compiler is free to ignore. > Konstantin I follow Linux/Linus advice and resist the urge to add strong inlining. The compiler does a good job of deciding to inline, and many times the reason it chooses for not inlining are quite good like: - the code is on an unlikely branch - register pressure means inlining would mean the code would be worse Therefore my rules are: * only use inline for small functions. Let compiler decide on larger static funcs * write code where most functions are static (localized scope) where compiler can decide * reserve always inline for things that access hardware and would break if not inlined.
2015-10-21 08:47, Stephen Hemminger: > On Wed, 21 Oct 2015 09:38:37 +0000 > "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote: > > I also don't understand what's wrong with using 'always_inline' here. > > As I understand the author wants compiler to *always inline* that function. > > So seems perfectly ok to use it here. > > As I remember just 'inline' is sort of recommendation that compiler is free to ignore. > > Konstantin > > I follow Linux/Linus advice and resist the urge to add strong inlining. > The compiler does a good job of deciding to inline, and many times > the reason it chooses for not inlining are quite good like: > - the code is on an unlikely branch > - register pressure means inlining would mean the code would be worse > > Therefore my rules are: > * only use inline for small functions. Let compiler decide on larger static funcs > * write code where most functions are static (localized scope) where compiler > can decide > * reserve always inline for things that access hardware and would break if not inlined. It would be interesting to do some benchmarks with/without "always" keyword and add these rules in the coding style guide.
On 21/10/2015 16:47, Stephen Hemminger wrote: > On Wed, 21 Oct 2015 09:38:37 +0000 > "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote: > >>>> minor nits: >>>> * this doesn't need to be marked as always inline, >>>> that is as they say in English "shooting a fly with a bazooka" >>> Stephen: >>> always_inline "forces" the compiler to inline this function, like a macro. >>> When should it be used or is it not preferred at all? >> I also don't understand what's wrong with using 'always_inline' here. >> As I understand the author wants compiler to *always inline* that function. >> So seems perfectly ok to use it here. >> As I remember just 'inline' is sort of recommendation that compiler is free to ignore. >> Konstantin > I follow Linux/Linus advice and resist the urge to add strong inlining. > The compiler does a good job of deciding to inline, and many times > the reason it chooses for not inlining are quite good like: > - the code is on an unlikely branch > - register pressure means inlining would mean the code would be worse > > Therefore my rules are: > * only use inline for small functions. Let compiler decide on larger static funcs > * write code where most functions are static (localized scope) where compiler > can decide > * reserve always inline for things that access hardware and would break if not inlined. > On the other hand, there are cases where we know the compiler will likely inline, but we also know that not inlining could have a high performance penalty, and in that case marking as "always inline" would be appropriate - even though it is likely unnecessary for most compilers. In such a case, I would expect the verification check to be: explicitly mark the function as *not* to be inlined, and see what the perf drop is. If it's a noticable drop, marking as always-inline is an ok precaution against future compiler changes. 
Also, we need to remember that compilers cannot know whether a function is data path or not, and also whether a function will be called per-packet or per-burst. That's only something the programmer will know, and functions called per-packet on the datapath generally need to be inlined for performance. /Bruce
On 21/10/2015 16:52, Thomas Monjalon wrote: > 2015-10-21 08:47, Stephen Hemminger: >> On Wed, 21 Oct 2015 09:38:37 +0000 >> "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote: >>> I also don't understand what's wrong with using 'always_inline' here. >>> As I understand the author wants compiler to *always inline* that function. >>> So seems perfectly ok to use it here. >>> As I remember just 'inline' is sort of recommendation that compiler is free to ignore. >>> Konstantin >> I follow Linux/Linus advice and resist the urge to add strong inlining. >> The compiler does a good job of deciding to inline, and many times >> the reason it chooses for not inlining are quite good like: >> - the code is on an unlikely branch >> - register pressure means inlining would mean the code would be worse >> >> Therefore my rules are: >> * only use inline for small functions. Let compiler decide on larger static funcs >> * write code where most functions are static (localized scope) where compiler >> can decide >> * reserve always inline for things that access hardware and would break if not inlined. > It would be interesting to do some benchmarks with/without "always" keyword > and add these rules in the coding style guide. > Better test would be to measure the hit by explicitly not having it inlined. You need to know the hit of the compiler making the wrong choice, even if it normally makes the right one. Bruce
> -----Original Message----- > From: Richardson, Bruce > Sent: Wednesday, October 21, 2015 4:56 PM > To: Stephen Hemminger; Ananyev, Konstantin > Cc: Michael S. Tsirkin; dev@dpdk.org; marcel@redhat.com; Changchun Ouyang > Subject: Re: [dpdk-dev] [PATCH v7 4/8] vhost: rxtx: use queue id instead of constant ring index > > > > On 21/10/2015 16:47, Stephen Hemminger wrote: > > On Wed, 21 Oct 2015 09:38:37 +0000 > > "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote: > > > >>>> minor nits: > >>>> * this doesn't need to be marked as always inline, > >>>> that is as they say in English "shooting a fly with a bazooka" > >>> Stephen: > >>> always_inline "forces" the compiler to inline this function, like a macro. > >>> When should it be used or is it not preferred at all? > >> I also don't understand what's wrong with using 'always_inline' here. > >> As I understand the author wants compiler to *always inline* that function. > >> So seems perfectly ok to use it here. > >> As I remember just 'inline' is sort of recommendation that compiler is free to ignore. > >> Konstantin > > I follow Linux/Linus advice and resist the urge to add strong inlining. > > The compiler does a good job of deciding to inline, and many times > > the reason it chooses for not inlining are quite good like: > > - the code is on an unlikely branch > > - register pressure means inlining would mean the code would be worse Yep, that's all true, but as I remember 'inline' keyword itself doesn't force compiler to always inline that function. It is more like a recommendation to the compiler. Looking at any dpdk binary, there are plenty of places where function is declared as 'inline', but compiler decided not to and followed standard function call convention for it. Again, from C spec: "6. A function declared with an inline function specifier is an inline function. Making a function an inline function suggests that calls to the function be as fast as possible.138) 7. 
The extent to which such suggestions are effective is implementation-defined.139) ... 139) For example, an implementation might never perform inline substitution, or might only perform inline substitutions to calls in the scope of an inline declaration." > > > > Therefore my rules are: > > * only use inline for small functions. Let compiler decide on larger static funcs As I remember function we are talking about is really small. > > * write code where most functions are static (localized scope) where compiler > > can decide > > * reserve always inline for things that access hardware and would break if not inlined. Sorry, but the latest rule looks too restrictive to me. Don't see any reason why we all have to follow it. BTW, as I can see there are plenty of always_inline functions inside linux kernel (memory allocator, scheduler, etc). > > > On the other hand, there are cases where we know the compiler will likely inline, but we also know that not inlining could have a high performance penalty, and in that case marking as "always inline" would be appropriate - even though it is likely unnecessary for most compilers. Yep, totally agree here. If memory serves me right - in the past we observed few noticeable performance drops because of that when switching from one compiler version to another. Konstantin > In such a case, I would expect the verification check to be: > explicitly mark the function as *not* to be inlined, and see what the perf drop is. If it's a noticable drop, marking as always-inline is an ok precaution against future compiler changes. > > Also, we need to remember that compilers cannot know whether a function is data path or not, and also whether a function will be called per-packet or per-burst. That's only something the programmer will know, and functions called per-packet on the datapath generally need to be inlined for performance. > > /Bruce
On 10/21/2015 11:48 AM, Yuanhan Liu wrote: [...] > > #define MAX_PKT_BURST 32 > > +static inline int __attribute__((always_inline)) > +is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx) > +{ > + if ((is_tx ^ (virtq_idx & 0x1)) || > + (virtq_idx >= max_qp_idx * VIRTIO_QNUM)) > + return 0; > + > + return 1; > +} > + > /** > * This function adds buffers to the virtio devices RX virtqueue. Buffers can > * be received from the physical port or from another virtio device. A packet > @@ -68,12 +78,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, > uint8_t success = 0; > > LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_rx()\n", dev->device_fh); > - if (unlikely(queue_id != VIRTIO_RXQ)) { > - LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); > + if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) { > + RTE_LOG(ERR, VHOST_DATA, > + "%s (%"PRIu64"): virtqueue idx:%d invalid.\n", > + __func__, dev->device_fh, queue_id); > return 0; > } > > - vq = dev->virtqueue[VIRTIO_RXQ]; > + vq = dev->virtqueue[queue_id]; > count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count; > > /* > Besides the always_inline issue, i think we should remove the queue_id check here in the "data" path. Caller should guarantee that they pass us the correct queue idx. We could add VHOST_DEBUG macro for the sanity check for debug purpose only. On the other hand, currently we lack of enough check for the guest because there could be malicious guests. Plan to fix this in next release. [...]
On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > Please note that for virtio devices, guest is supposed to > > > control the placement of incoming packets in RX queues. > > > > I may not follow you. > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > guest, how could the guest take the control here? > > > > --yliu > > vhost should do what guest told it to. > > See virtio spec: > 5.1.6.5.5 Automatic receive steering in multiqueue mode Spec says: After the driver transmitted a packet of a flow on transmitqX, the device SHOULD cause incoming packets for that flow to be steered to receiveqX. Michael, I still have no idea how vhost could know the flow even after discussion with Huawei. Could you be more specific about this? Say, how could guest know that? And how could guest tell vhost which RX is gonna to use? Thanks. --yliu
On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > Please note that for virtio devices, guest is supposed to > > > > control the placement of incoming packets in RX queues. > > > > > > I may not follow you. > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > guest, how could the guest take the control here? > > > > > > --yliu > > > > vhost should do what guest told it to. > > > > See virtio spec: > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > Spec says: > > After the driver transmitted a packet of a flow on transmitqX, > the device SHOULD cause incoming packets for that flow to be > steered to receiveqX. > > > Michael, I still have no idea how vhost could know the flow even > after discussion with Huawei. Could you be more specific about > this? Say, how could guest know that? And how could guest tell > vhost which RX is gonna to use? > > Thanks. > > --yliu I don't really understand the question. When guests transmits a packet, it makes a decision about the flow to use, and maps that to a tx/rx pair of queues. It sends packets out on the tx queue and expects device to return packets from the same flow on the rx queue. During transmit, device needs to figure out the flow of packets as they are received from guest, and track which flows go on which tx queue. When it selects the rx queue, it has to use the same table. There is currently no provision for controlling steering for uni-directional flows which are possible e.g. with UDP. We might solve this in a future spec - for example, set a flag notifying guest that steering information is missing for a given flow, for example by setting a flag in a packet, or using the command queue, and have guest send a dummy empty packet to set steering rule for this flow.
On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > Please note that for virtio devices, guest is supposed to > > > > > control the placement of incoming packets in RX queues. > > > > > > > > I may not follow you. > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > guest, how could the guest take the control here? > > > > > > > > --yliu > > > > > > vhost should do what guest told it to. > > > > > > See virtio spec: > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > Spec says: > > > > After the driver transmitted a packet of a flow on transmitqX, > > the device SHOULD cause incoming packets for that flow to be > > steered to receiveqX. > > > > > > Michael, I still have no idea how vhost could know the flow even > > after discussion with Huawei. Could you be more specific about > > this? Say, how could guest know that? And how could guest tell > > vhost which RX is gonna to use? > > > > Thanks. > > > > --yliu > > I don't really understand the question. > > When guests transmits a packet, it makes a decision > about the flow to use, and maps that to a tx/rx pair of queues. > > It sends packets out on the tx queue and expects device to > return packets from the same flow on the rx queue. > > During transmit, device needs to figure out the flow > of packets as they are received from guest, and track > which flows go on which tx queue. > When it selects the rx queue, it has to use the same table. Thanks for the length explanation, Michael! I guess the key is are we able to get the table inside vhost-user lib? And, are you looking for something like following? 
	static int
	rte_vhost_enqueue_burst(pkts)
	{
		for_each_pkts(pkt) {
			int rxq = get_rxq_from_table(pkt);

			queue_to_rxq(pkt, rxq);
		}
	}

BTW, there should be such an implementation somewhere already, right?
If so, would you please point me to it?

In the meantime, I will read more doc/code to try to understand it.

	--yliu

>
> There is currently no provision for controlling
> steering for uni-directional
> flows which are possible e.g. with UDP.
>
> We might solve this in a future spec - for example, set a flag notifying
> guest that steering information is missing for a given flow, for example
> by setting a flag in a packet, or using the command queue, and have
> guest send a dummy empty packet to set steering rule for this flow.
>
> --
> MST
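Yuanhan's sketch routes each packet through a per-flow steering table. A minimal, self-contained sketch of what such a table could look like underneath a `get_rxq_from_table()`-style helper (all names, the table size, and the fall-back-to-queue-0 policy are hypothetical, not part of the vhost lib):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FLOW_TABLE_SIZE 1024    /* hypothetical bucket count */
#define INVALID_QUEUE   0xFFFF

/* One entry per flow-hash bucket: the queue pair the guest last
 * transmitted this flow on. */
struct flow_table {
	uint16_t queue[FLOW_TABLE_SIZE];
};

static void
flow_table_init(struct flow_table *tbl)
{
	/* 0xFF bytes make every uint16_t entry INVALID_QUEUE */
	memset(tbl->queue, 0xFF, sizeof(tbl->queue));
}

/* Learn on the guest-TX path: remember which queue pair the guest
 * used for this flow hash. */
static void
flow_table_update(struct flow_table *tbl, uint32_t hash, uint16_t qid)
{
	tbl->queue[hash % FLOW_TABLE_SIZE] = qid;
}

/* Look up on the enqueue (guest-RX) path: steer the packet back to
 * the same queue pair, falling back to queue 0 for unknown flows. */
static uint16_t
flow_table_select_rxq(const struct flow_table *tbl, uint32_t hash)
{
	uint16_t qid = tbl->queue[hash % FLOW_TABLE_SIZE];

	return qid == INVALID_QUEUE ? 0 : qid;
}
```

The split between an update on the TX path and a lookup on the RX path mirrors the spec text quoted above: the device steers a flow to receiveqX only after seeing it on transmitqX.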
On Thu, Oct 22, 2015 at 10:07:10PM +0800, Yuanhan Liu wrote: > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > I may not follow you. > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > guest, how could the guest take the control here? > > > > > > > > > > --yliu > > > > > > > > vhost should do what guest told it to. > > > > > > > > See virtio spec: > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > Spec says: > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > the device SHOULD cause incoming packets for that flow to be > > > steered to receiveqX. > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > after discussion with Huawei. Could you be more specific about > > > this? Say, how could guest know that? And how could guest tell > > > vhost which RX is gonna to use? > > > > > > Thanks. > > > > > > --yliu > > > > I don't really understand the question. > > > > When guests transmits a packet, it makes a decision > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > It sends packets out on the tx queue and expects device to > > return packets from the same flow on the rx queue. > > > > During transmit, device needs to figure out the flow > > of packets as they are received from guest, and track > > which flows go on which tx queue. > > When it selects the rx queue, it has to use the same table. > > Thanks for the length explanation, Michael! > > I guess the key is are we able to get the table inside vhost-user > lib? 
And, are you looking for something like following? > > static int rte_vhost_enqueue_burst(pkts) > { > for_each_pkts(pkt) { > int rxq = get_rxq_from_table(pkt); > > queue_to_rxq(pkt, rxq); > } > } > > BTW, there should be such implementation at some where, right? > If so, would you please point it to me? See tun_flow_update in drivers/net/tun.c in Linux. > In the meanwhile, I will read more doc/code to try to understand > it. > > --yliu > > > > > There is currently no provision for controlling > > steering for uni-directional > > flows which are possible e.g. with UDP. > > > > We might solve this in a future spec - for example, set a flag notifying > > guest that steering information is missing for a given flow, for example > > by setting a flag in a packet, or using the command queue, and have > > guest send a dummy empty packet to set steering rule for this flow. > > > > > > -- > > MST
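For reference, tun_flow_update() in drivers/net/tun.c keeps per-flow entries carrying a queue index and a last-updated timestamp, so stale mappings age out. A loosely analogous sketch (names and the TTL are hypothetical; the real tun.c code hashes skb->rxhash and timestamps with jiffies):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FLOW_ENTRIES 256
#define FLOW_TTL     10          /* hypothetical age-out, in ticks */

struct flow_entry {
	uint32_t hash;      /* flow hash, like tun_flow_entry's rxhash  */
	uint16_t queue;     /* queue index the flow was last seen on    */
	uint64_t updated;   /* last-update time, like tun's 'updated'   */
};

struct flow_cache {
	struct flow_entry e[FLOW_ENTRIES];
};

/* TX-side learning, loosely modeled on tun_flow_update(): refresh the
 * queue index and timestamp each time the flow is seen on a TX queue. */
static void
flow_update(struct flow_cache *fc, uint32_t hash, uint16_t qid, uint64_t now)
{
	struct flow_entry *fe = &fc->e[hash % FLOW_ENTRIES];

	fe->hash = hash;
	fe->queue = qid;
	fe->updated = now;
}

/* RX-side lookup: only honor entries that match and have not aged out.
 * Returns 0 and fills *qid on a hit, -1 on an unknown or stale flow. */
static int
flow_lookup(const struct flow_cache *fc, uint32_t hash,
	    uint64_t now, uint16_t *qid)
{
	const struct flow_entry *fe = &fc->e[hash % FLOW_ENTRIES];

	if (fe->hash != hash || now - fe->updated > FLOW_TTL)
		return -1;
	*qid = fe->queue;
	return 0;
}
```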
On Thu, Oct 22, 2015 at 05:19:01PM +0300, Michael S. Tsirkin wrote: > On Thu, Oct 22, 2015 at 10:07:10PM +0800, Yuanhan Liu wrote: > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > --yliu > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > See virtio spec: > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > Spec says: > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > the device SHOULD cause incoming packets for that flow to be > > > > steered to receiveqX. > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > after discussion with Huawei. Could you be more specific about > > > > this? Say, how could guest know that? And how could guest tell > > > > vhost which RX is gonna to use? > > > > > > > > Thanks. > > > > > > > > --yliu > > > > > > I don't really understand the question. > > > > > > When guests transmits a packet, it makes a decision > > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > > > It sends packets out on the tx queue and expects device to > > > return packets from the same flow on the rx queue. > > > > > > During transmit, device needs to figure out the flow > > > of packets as they are received from guest, and track > > > which flows go on which tx queue. 
> > > When it selects the rx queue, it has to use the same table.
> >
> > Thanks for the length explanation, Michael!
> >
> > I guess the key is are we able to get the table inside vhost-user
> > lib? And, are you looking for something like following?
> >
> > 	static int
> > 	rte_vhost_enqueue_burst(pkts)
> > 	{
> > 		for_each_pkts(pkt) {
> > 			int rxq = get_rxq_from_table(pkt);
> >
> > 			queue_to_rxq(pkt, rxq);
> > 		}
> > 	}
> >
> > BTW, there should be such implementation at some where, right?
> > If so, would you please point it to me?
>
> See tun_flow_update in drivers/net/tun.c in Linux.

Thanks.

We had a discussion today, and we need to implement that. However,
the v2.2 merge window is pretty near its end now, so it's unlikely
we can make it into this release. We may add it in v2.3.

	--yliu

> > In the meanwhile, I will read more doc/code to try to understand
> > it.
> >
> > 	--yliu
> >
> > >
> > > There is currently no provision for controlling
> > > steering for uni-directional
> > > flows which are possible e.g. with UDP.
> > >
> > > We might solve this in a future spec - for example, set a flag notifying
> > > guest that steering information is missing for a given flow, for example
> > > by setting a flag in a packet, or using the command queue, and have
> > > guest send a dummy empty packet to set steering rule for this flow.
On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > Please note that for virtio devices, guest is supposed to > > > > > control the placement of incoming packets in RX queues. > > > > > > > > I may not follow you. > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > guest, how could the guest take the control here? > > > > > > > > --yliu > > > > > > vhost should do what guest told it to. > > > > > > See virtio spec: > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > Spec says: > > > > After the driver transmitted a packet of a flow on transmitqX, > > the device SHOULD cause incoming packets for that flow to be > > steered to receiveqX. > > > > > > Michael, I still have no idea how vhost could know the flow even > > after discussion with Huawei. Could you be more specific about > > this? Say, how could guest know that? And how could guest tell > > vhost which RX is gonna to use? > > > > Thanks. > > > > --yliu > > I don't really understand the question. > > When guests transmits a packet, it makes a decision > about the flow to use, and maps that to a tx/rx pair of queues. > > It sends packets out on the tx queue and expects device to > return packets from the same flow on the rx queue. Why? I can understand that there should be a mapping between flows and queues in a way that there is no re-ordering, but I can't see the relation of receiving a flow with a TX queue. fbl > During transmit, device needs to figure out the flow > of packets as they are received from guest, and track > which flows go on which tx queue. > When it selects the rx queue, it has to use the same table. > > There is currently no provision for controlling > steering for uni-directional > flows which are possible e.g. 
with UDP. > > We might solve this in a future spec - for example, set a flag notifying > guest that steering information is missing for a given flow, for example > by setting a flag in a packet, or using the command queue, and have > guest send a dummy empty packet to set steering rule for this flow. > > > -- > MST >
On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > I may not follow you. > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > guest, how could the guest take the control here? > > > > > > > > > > --yliu > > > > > > > > vhost should do what guest told it to. > > > > > > > > See virtio spec: > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > Spec says: > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > the device SHOULD cause incoming packets for that flow to be > > > steered to receiveqX. > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > after discussion with Huawei. Could you be more specific about > > > this? Say, how could guest know that? And how could guest tell > > > vhost which RX is gonna to use? > > > > > > Thanks. > > > > > > --yliu > > > > I don't really understand the question. > > > > When guests transmits a packet, it makes a decision > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > It sends packets out on the tx queue and expects device to > > return packets from the same flow on the rx queue. > > Why? I can understand that there should be a mapping between > flows and queues in a way that there is no re-ordering, but > I can't see the relation of receiving a flow with a TX queue. > > fbl That's the way virtio chose to program the rx steering logic. 
It's low overhead (no special commands), and works well for TCP when user is an endpoint since rx and tx for tcp are generally tied (because of ack handling). We can discuss other ways, e.g. special commands for guests to program steering. We'd have to first see some data showing the current scheme is problematic somehow. > > During transmit, device needs to figure out the flow > > of packets as they are received from guest, and track > > which flows go on which tx queue. > > When it selects the rx queue, it has to use the same table. > > > > There is currently no provision for controlling > > steering for uni-directional > > flows which are possible e.g. with UDP. > > > > We might solve this in a future spec - for example, set a flag notifying > > guest that steering information is missing for a given flow, for example > > by setting a flag in a packet, or using the command queue, and have > > guest send a dummy empty packet to set steering rule for this flow. > > > > > > -- > > MST > >
On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > --yliu > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > See virtio spec: > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > Spec says: > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > the device SHOULD cause incoming packets for that flow to be > > > > steered to receiveqX. > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > after discussion with Huawei. Could you be more specific about > > > > this? Say, how could guest know that? And how could guest tell > > > > vhost which RX is gonna to use? > > > > > > > > Thanks. > > > > > > > > --yliu > > > > > > I don't really understand the question. > > > > > > When guests transmits a packet, it makes a decision > > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > > > It sends packets out on the tx queue and expects device to > > > return packets from the same flow on the rx queue. > > > > Why? I can understand that there should be a mapping between > > flows and queues in a way that there is no re-ordering, but > > I can't see the relation of receiving a flow with a TX queue. 
> > > > fbl > > That's the way virtio chose to program the rx steering logic. > > It's low overhead (no special commands), and > works well for TCP when user is an endpoint since rx and tx > for tcp are generally tied (because of ack handling). > > We can discuss other ways, e.g. special commands for guests to > program steering. > We'd have to first see some data showing the current scheme > is problematic somehow. The issue is that the restriction imposes operations to be done in the data path. For instance, Open vSwitch has N number of threads to manage X RX queues. We distribute them in round-robin fashion. So, the thread polling one RX queue will do all the packet processing and push it to the TX queue of the other device (vhost-user or not) using the same 'id'. Doing so we can avoid locking between threads and TX queues and any other extra computation while still keeping the packet ordering/distribution fine. However, if vhost-user has to send packets according with guest mapping, it will require locking between queues and additional operations to select the appropriate queue. Those actions will cause performance issues. I see no real benefit from enforcing the guest mapping outside to justify all the computation cost, so my suggestion is to change the spec to suggest that behavior, but not to require that to be compliant. Does that make sense? Thanks, fbl
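Flavio's two schemes can be contrasted in a short sketch (hypothetical helpers, not OVS code): with the 1:1 mapping, TX queue selection is a pure function of which thread is polling, while guest-controlled steering makes it a function of the packet's flow, so distinct polling threads can land on the same virtio queue:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pkt {
	uint32_t flow_hash;   /* hypothetical per-packet metadata */
};

/* OVS-style 1:1 mapping: the thread polling RX queue 'rx_qid' always
 * transmits on TX queue 'rx_qid' of the peer device.  No two threads
 * ever share a TX queue, so no locking is needed. */
static uint16_t
select_txq_1to1(uint16_t rx_qid)
{
	return rx_qid;
}

/* Guest-controlled steering: the TX queue depends on the packet's
 * flow, so two threads polling different RX queues may pick the same
 * virtio queue and would have to synchronize on it. */
static uint16_t
select_txq_steered(const uint16_t *guest_map, size_t map_sz,
		   const struct pkt *p)
{
	return guest_map[p->flow_hash % map_sz];
}
```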
On Wed, Oct 28, 2015 at 06:30:41PM -0200, Flavio Leitner wrote: > On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: > > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > > > See virtio spec: > > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > > > Spec says: > > > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > > the device SHOULD cause incoming packets for that flow to be > > > > > steered to receiveqX. > > > > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > > after discussion with Huawei. Could you be more specific about > > > > > this? Say, how could guest know that? And how could guest tell > > > > > vhost which RX is gonna to use? > > > > > > > > > > Thanks. > > > > > > > > > > --yliu > > > > > > > > I don't really understand the question. > > > > > > > > When guests transmits a packet, it makes a decision > > > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > > > > > It sends packets out on the tx queue and expects device to > > > > return packets from the same flow on the rx queue. > > > > > > Why? 
I can understand that there should be a mapping between
> > > flows and queues in a way that there is no re-ordering, but
> > > I can't see the relation of receiving a flow with a TX queue.
> > >
> > > fbl
> >
> > That's the way virtio chose to program the rx steering logic.
> >
> > It's low overhead (no special commands), and
> > works well for TCP when user is an endpoint since rx and tx
> > for tcp are generally tied (because of ack handling).
> >
> > We can discuss other ways, e.g. special commands for guests to
> > program steering.
> > We'd have to first see some data showing the current scheme
> > is problematic somehow.
>
> The issue is that the restriction imposes operations to be done in the
> data path. For instance, Open vSwitch has N number of threads to manage
> X RX queues. We distribute them in round-robin fashion. So, the thread
> polling one RX queue will do all the packet processing and push it to the
> TX queue of the other device (vhost-user or not) using the same 'id'.
>
> Doing so we can avoid locking between threads and TX queues and any other
> extra computation while still keeping the packet ordering/distribution fine.
>
> However, if vhost-user has to send packets according with guest mapping,
> it will require locking between queues and additional operations to select
> the appropriate queue. Those actions will cause performance issues.

You only need to send updates if the guest moves a flow to another queue.
This is very rare since the guest must avoid reordering.

Oh, and you don't have to have locking. Just update the table and make
the target pick up the new value at leisure; worst case, a packet ends up
in the wrong queue.

> I see no real benefit from enforcing the guest mapping outside to
> justify all the computation cost, so my suggestion is to change the
> spec to suggest that behavior, but not to require that to be compliant.
>
> Does that make sense?
> > Thanks, > fbl It's not a question of what the spec says, it's a question of the quality of implementation: guest needs to be able to balance load between CPUs serving the queues, this means it needs a way to control steering. IMO having dpdk control it makes no sense in the scenario. This is different from dpdk sending packets to real NIC queues which all operate in parallel.
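Michael's lockless-update point in this exchange can be sketched with C11 atomics: the learner publishes a new queue id with a plain atomic store, enqueue threads pick it up whenever they next load, and the worst case is a few packets landing on the previous queue (table layout and sizes hypothetical):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define TBL_SZ 256

/* Steering table shared between the thread that learns flow->queue
 * mappings and the threads that enqueue packets.  No lock: the writer
 * publishes with a relaxed atomic store and readers pick the new value
 * up "at leisure"; a brief window of mis-queued packets is tolerated. */
static _Atomic uint16_t steer_tbl[TBL_SZ];

static void
steer_update(uint32_t hash, uint16_t qid)
{
	atomic_store_explicit(&steer_tbl[hash % TBL_SZ], qid,
			      memory_order_relaxed);
}

static uint16_t
steer_lookup(uint32_t hash)
{
	return atomic_load_explicit(&steer_tbl[hash % TBL_SZ],
				    memory_order_relaxed);
}
```

Relaxed ordering suffices here because a stale read only means the old (still valid until recently) queue is used, exactly the "worst case a packet ends up in the wrong queue" trade-off described above.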
On Wed, Oct 28, 2015 at 11:12:25PM +0200, Michael S. Tsirkin wrote: > On Wed, Oct 28, 2015 at 06:30:41PM -0200, Flavio Leitner wrote: > > On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: > > > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > > > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > > > > > See virtio spec: > > > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > > > > > Spec says: > > > > > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > > > the device SHOULD cause incoming packets for that flow to be > > > > > > steered to receiveqX. > > > > > > > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > > > after discussion with Huawei. Could you be more specific about > > > > > > this? Say, how could guest know that? And how could guest tell > > > > > > vhost which RX is gonna to use? > > > > > > > > > > > > Thanks. > > > > > > > > > > > > --yliu > > > > > > > > > > I don't really understand the question. > > > > > > > > > > When guests transmits a packet, it makes a decision > > > > > about the flow to use, and maps that to a tx/rx pair of queues. 
> > > > > > > > > > It sends packets out on the tx queue and expects device to > > > > > return packets from the same flow on the rx queue. > > > > > > > > Why? I can understand that there should be a mapping between > > > > flows and queues in a way that there is no re-ordering, but > > > > I can't see the relation of receiving a flow with a TX queue. > > > > > > > > fbl > > > > > > That's the way virtio chose to program the rx steering logic. > > > > > > It's low overhead (no special commands), and > > > works well for TCP when user is an endpoint since rx and tx > > > for tcp are generally tied (because of ack handling). It is low overhead for the control plane, but not for the data plane. > > > We can discuss other ways, e.g. special commands for guests to > > > program steering. > > > We'd have to first see some data showing the current scheme > > > is problematic somehow. The issue is that the spec assumes the packets are coming in a serialized way and the distribution will be made by vhost-user but that isn't necessarily true. > > The issue is that the restriction imposes operations to be done in the > > data path. For instance, Open vSwitch has N number of threads to manage > > X RX queues. We distribute them in round-robin fashion. So, the thread > > polling one RX queue will do all the packet processing and push it to the > > TX queue of the other device (vhost-user or not) using the same 'id'. > > > > Doing so we can avoid locking between threads and TX queues and any other > > extra computation while still keeping the packet ordering/distribution fine. > > > > However, if vhost-user has to send packets according with guest mapping, > > it will require locking between queues and additional operations to select > > the appropriate queue. Those actions will cause performance issues. > > You only need to send updates if guest moves a flow to another queue. > This is very rare since guest must avoid reordering. OK, maybe I missed something. 
Could you point me to the spec talking about the update? > Oh and you don't have to have locking. Just update the table and make > the target pick up the new value at leasure, worst case a packet ends up > in the wrong queue. You do because packets are coming on different vswitch queues and they could get mapped to the same virtio queue enforced by the guest, so some sort of synchronization is needed. That is one thing. Another is that it will need some mapping between the hash available in the vswitch (not necessary L2~L4) with the hash/queue mapping provided by the guest. That doesn't require locking, but it's a costly operation. Alternatively, vswitch could calculate full L2-L4 hash which is also a costly operation. Packets ending in the wrong queue isn't that bad, but then we need to enforce processing order because re-ordering is really bad. > > I see no real benefit from enforcing the guest mapping outside to > > justify all the computation cost, so my suggestion is to change the > > spec to suggest that behavior, but not to require that to be compliant. > > > > Does that make sense? > > > > Thanks, > > fbl > > It's not a question of what the spec says, it's a question of the > quality of implementation: guest needs to be able to balance load > between CPUs serving the queues, this means it needs a way to control > steering. Indeed, a mapping could be provided by the guest to steer certain flows to specific queues and of course the implementation must follow that. However, it seems that guest could let that mapping simply open too. > IMO having dpdk control it makes no sense in the scenario. Why not? The only requirement should be that the implemention should avoid re-ordering by keeping the mapping stable between streams and queues. > This is different from dpdk sending packets to real NIC > queues which all operate in parallel. The goal of multiqueue support is to have them working in parallel. fbl
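For illustration, the "costly" full L2-L4 hash Flavio mentions amounts to per-packet work on the order of the following toy FNV-1a over a 5-tuple; a real datapath would use rte_jhash/rte_hash_crc or reuse an RSS hash computed by the NIC precisely to avoid this cost (struct and names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* hypothetical 5-tuple; a vswitch may only have a partial hash handy */
struct tuple5 {
	uint32_t src_ip, dst_ip;
	uint16_t src_port, dst_port;
	uint8_t  proto;
};

/* Fold one 32-bit value into an FNV-1a hash, byte by byte. */
static uint32_t
fnv1a_u32(uint32_t h, uint32_t v)
{
	for (int i = 0; i < 4; i++) {
		h ^= (v >> (8 * i)) & 0xff;
		h *= 16777619u;
	}
	return h;
}

/* Software L2-L4 hash over the five fields (hashed field by field to
 * avoid touching struct padding). */
static uint32_t
hash_5tuple(const struct tuple5 *t)
{
	uint32_t h = 2166136261u;

	h = fnv1a_u32(h, t->src_ip);
	h = fnv1a_u32(h, t->dst_ip);
	h = fnv1a_u32(h, t->src_port);
	h = fnv1a_u32(h, t->dst_port);
	h = fnv1a_u32(h, t->proto);
	return h;
}
```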
On Mon, Nov 16, 2015 at 02:20:57PM -0800, Flavio Leitner wrote: > On Wed, Oct 28, 2015 at 11:12:25PM +0200, Michael S. Tsirkin wrote: > > On Wed, Oct 28, 2015 at 06:30:41PM -0200, Flavio Leitner wrote: > > > On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: > > > > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > > > > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > > > > > > > See virtio spec: > > > > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > > > > > > > Spec says: > > > > > > > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > > > > the device SHOULD cause incoming packets for that flow to be > > > > > > > steered to receiveqX. > > > > > > > > > > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > > > > after discussion with Huawei. Could you be more specific about > > > > > > > this? Say, how could guest know that? And how could guest tell > > > > > > > vhost which RX is gonna to use? > > > > > > > > > > > > > > Thanks. > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > I don't really understand the question. 
> > > > > > > > > > > > When guests transmits a packet, it makes a decision > > > > > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > > > > > > > > > It sends packets out on the tx queue and expects device to > > > > > > return packets from the same flow on the rx queue. > > > > > > > > > > Why? I can understand that there should be a mapping between > > > > > flows and queues in a way that there is no re-ordering, but > > > > > I can't see the relation of receiving a flow with a TX queue. > > > > > > > > > > fbl > > > > > > > > That's the way virtio chose to program the rx steering logic. > > > > > > > > It's low overhead (no special commands), and > > > > works well for TCP when user is an endpoint since rx and tx > > > > for tcp are generally tied (because of ack handling). > > It is low overhead for the control plane, but not for the data plane. Well, there's zero data plane overhead within the guest. You can't go lower :) > > > > We can discuss other ways, e.g. special commands for guests to > > > > program steering. > > > > We'd have to first see some data showing the current scheme > > > > is problematic somehow. > > The issue is that the spec assumes the packets are coming in > a serialized way and the distribution will be made by vhost-user > but that isn't necessarily true. > Making the distribution guest controlled is obviously the right thing to do if guest is the endpoint: we need guest scheduler to make the decisions, it's the only entity that knows how are tasks distributed across VCPUs. It's possible that this is not the right thing for when guest is just doing bridging between two VNICs: are you saying packets should just go from RX queue N on eth0 to TX queue N on eth1, making host make all the queue selection decisions? This sounds reasonable. Since there's a mix of local and bridged traffic normally, does this mean we need a per-packet flag that tells host to ignore the packet for classification purposes? 
> > > The issue is that the restriction imposes operations to be done in the > > > data path. For instance, Open vSwitch has N number of threads to manage > > > X RX queues. We distribute them in round-robin fashion. So, the thread > > > polling one RX queue will do all the packet processing and push it to the > > > TX queue of the other device (vhost-user or not) using the same 'id'. > > > > > > Doing so we can avoid locking between threads and TX queues and any other > > > extra computation while still keeping the packet ordering/distribution fine. > > > > > > However, if vhost-user has to send packets according with guest mapping, > > > it will require locking between queues and additional operations to select > > > the appropriate queue. Those actions will cause performance issues. > > > > You only need to send updates if guest moves a flow to another queue. > > This is very rare since guest must avoid reordering. > > OK, maybe I missed something. Could you point me to the spec talking > about the update? > It doesn't talk about that really - it's an implementation detail. What I am saying is that you can have e.g. a per queue data structure with flows using it. If you find the flow there, then you know nothing changed and there is no need to update other queues. > > Oh and you don't have to have locking. Just update the table and make > > the target pick up the new value at leasure, worst case a packet ends up > > in the wrong queue. > > You do because packets are coming on different vswitch queues and they > could get mapped to the same virtio queue enforced by the guest, so some > sort of synchronization is needed. Right. So to optimize that, you really need a 1:1 mapping, but this optimization only makes sense if guest is not in the end processing these packets in the application on the same CPU - otherwise you are just causing IPIs. 
With the per-packet flag to bypass the classifier as suggested above,
you would do a lookup, find the flow is not classified, and just forward
it 1:1 as you wanted to.

> That is one thing. Another is that it will need some mapping between the
> hash available in the vswitch (not necessarily L2-L4) and the hash/queue
> mapping provided by the guest. That doesn't require locking, but it's a
> costly operation. Alternatively, the vswitch could calculate the full
> L2-L4 hash, which is also a costly operation.
>
> Packets ending up in the wrong queue aren't that bad, but then we need to
> enforce processing order because re-ordering is really bad.

Right. So if you consider a mix of packets with guest as endpoint
and guest as a bridge, then there's apparently no way out -
you need to identify the flow somehow in order to know
which is which.

I guess one solution is to give up and make it a global
decision.

But OTOH I think igb supports calculating the RX hash in hardware:
it sets NETIF_F_RXHASH on Linux.
If so, can't that be used for the initial lookup?

> > > I see no real benefit from enforcing the guest mapping outside to
> > > justify all the computation cost, so my suggestion is to change the
> > > spec to suggest that behavior, but not to require that to be compliant.
> > >
> > > Does that make sense?
> > >
> > > Thanks,
> > > fbl
> >
> > It's not a question of what the spec says, it's a question of the
> > quality of implementation: guest needs to be able to balance load
> > between CPUs serving the queues, this means it needs a way to control
> > steering.
>
> Indeed, a mapping could be provided by the guest to steer certain flows
> to specific queues and of course the implementation must follow that.
> However, it seems that guest could simply leave that mapping open too.

Right, we can add such an option in the spec.

> > > IMO having dpdk control it makes no sense in the scenario.
>
> Why not? The only requirement should be that the implementation
> should avoid re-ordering by keeping the mapping stable between
> streams and queues.

Well this depends on whether there's an application within
the guest that consumes the flow and does something with
the data. If yes, then we need to be careful not to
compete with that application for CPU, otherwise
it won't be able to produce data.

I guess that's not the case for pktgen or forwarding;
in these cases networking is all you care about.

> > This is different from dpdk sending packets to real NIC
> > queues which all operate in parallel.
>
> The goal of multiqueue support is to have them working in parallel.
>
> fbl

What I meant is "in parallel with the application doing the
actual logic and producing the packets".
On 11/17/2015 04:23 PM, Michael S. Tsirkin wrote: > On Mon, Nov 16, 2015 at 02:20:57PM -0800, Flavio Leitner wrote: >> > On Wed, Oct 28, 2015 at 11:12:25PM +0200, Michael S. Tsirkin wrote: >>> > > On Wed, Oct 28, 2015 at 06:30:41PM -0200, Flavio Leitner wrote: >>>> > > > On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: >>>>> > > > > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: >>>>>> > > > > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: >>>>>>> > > > > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: >>>>>>>> > > > > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: >>>>>>>>> > > > > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: >>>>>>>>>>> > > > > > > > > > > Please note that for virtio devices, guest is supposed to >>>>>>>>>>> > > > > > > > > > > control the placement of incoming packets in RX queues. >>>>>>>>>> > > > > > > > > > >>>>>>>>>> > > > > > > > > > I may not follow you. >>>>>>>>>> > > > > > > > > > >>>>>>>>>> > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the >>>>>>>>>> > > > > > > > > > guest, how could the guest take the control here? >>>>>>>>>> > > > > > > > > > >>>>>>>>>> > > > > > > > > > --yliu >>>>>>>>> > > > > > > > > >>>>>>>>> > > > > > > > > vhost should do what guest told it to. >>>>>>>>> > > > > > > > > >>>>>>>>> > > > > > > > > See virtio spec: >>>>>>>>> > > > > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode >>>>>>>> > > > > > > > >>>>>>>> > > > > > > > Spec says: >>>>>>>> > > > > > > > >>>>>>>> > > > > > > > After the driver transmitted a packet of a flow on transmitqX, >>>>>>>> > > > > > > > the device SHOULD cause incoming packets for that flow to be >>>>>>>> > > > > > > > steered to receiveqX. 
>>>>>>>> > > > > > > > >>>>>>>> > > > > > > > >>>>>>>> > > > > > > > Michael, I still have no idea how vhost could know the flow even >>>>>>>> > > > > > > > after discussion with Huawei. Could you be more specific about >>>>>>>> > > > > > > > this? Say, how could guest know that? And how could guest tell >>>>>>>> > > > > > > > vhost which RX is gonna to use? >>>>>>>> > > > > > > > >>>>>>>> > > > > > > > Thanks. >>>>>>>> > > > > > > > >>>>>>>> > > > > > > > --yliu >>>>>>> > > > > > > >>>>>>> > > > > > > I don't really understand the question. >>>>>>> > > > > > > >>>>>>> > > > > > > When guests transmits a packet, it makes a decision >>>>>>> > > > > > > about the flow to use, and maps that to a tx/rx pair of queues. >>>>>>> > > > > > > >>>>>>> > > > > > > It sends packets out on the tx queue and expects device to >>>>>>> > > > > > > return packets from the same flow on the rx queue. >>>>>> > > > > > >>>>>> > > > > > Why? I can understand that there should be a mapping between >>>>>> > > > > > flows and queues in a way that there is no re-ordering, but >>>>>> > > > > > I can't see the relation of receiving a flow with a TX queue. >>>>>> > > > > > >>>>>> > > > > > fbl >>>>> > > > > >>>>> > > > > That's the way virtio chose to program the rx steering logic. >>>>> > > > > >>>>> > > > > It's low overhead (no special commands), and >>>>> > > > > works well for TCP when user is an endpoint since rx and tx >>>>> > > > > for tcp are generally tied (because of ack handling). >> > >> > It is low overhead for the control plane, but not for the data plane. > Well, there's zero data plane overhead within the guest. > You can't go lower :) > >>>>> > > > > We can discuss other ways, e.g. special commands for guests to >>>>> > > > > program steering. >>>>> > > > > We'd have to first see some data showing the current scheme >>>>> > > > > is problematic somehow. 
>> >
>> > The issue is that the spec assumes the packets are coming in
>> > a serialized way and the distribution will be made by vhost-user
>> > but that isn't necessarily true.
>> >
> Making the distribution guest controlled is obviously the right
> thing to do if guest is the endpoint: we need guest scheduler to
> make the decisions, it's the only entity that knows
> how are tasks distributed across VCPUs.
>
> It's possible that this is not the right thing for when guest
> is just doing bridging between two VNICs:
> are you saying packets should just go from RX queue N
> on eth0 to TX queue N on eth1, making host make all
> the queue selection decisions?

The problem is that the current automatic steering policy is not
flexible enough for all kinds of workloads in the guest. So we can
implement ntuple filters and export the interfaces to let the
guest/driver decide.

> This sounds reasonable. Since there's a mix of local and
> bridged traffic normally, does this mean we need
> a per-packet flag that tells host to
> ignore the packet for classification purposes?

This may not work well for all workloads, e.g. short-lived connections.
On Tue, Nov 17, 2015 at 10:23:38AM +0200, Michael S. Tsirkin wrote: > On Mon, Nov 16, 2015 at 02:20:57PM -0800, Flavio Leitner wrote: > > On Wed, Oct 28, 2015 at 11:12:25PM +0200, Michael S. Tsirkin wrote: > > > On Wed, Oct 28, 2015 at 06:30:41PM -0200, Flavio Leitner wrote: > > > > On Sat, Oct 24, 2015 at 08:47:10PM +0300, Michael S. Tsirkin wrote: > > > > > On Sat, Oct 24, 2015 at 12:34:08AM -0200, Flavio Leitner wrote: > > > > > > On Thu, Oct 22, 2015 at 02:32:31PM +0300, Michael S. Tsirkin wrote: > > > > > > > On Thu, Oct 22, 2015 at 05:49:55PM +0800, Yuanhan Liu wrote: > > > > > > > > On Wed, Oct 21, 2015 at 05:26:18PM +0300, Michael S. Tsirkin wrote: > > > > > > > > > On Wed, Oct 21, 2015 at 08:48:15PM +0800, Yuanhan Liu wrote: > > > > > > > > > > > Please note that for virtio devices, guest is supposed to > > > > > > > > > > > control the placement of incoming packets in RX queues. > > > > > > > > > > > > > > > > > > > > I may not follow you. > > > > > > > > > > > > > > > > > > > > Enqueuing packets to a RX queue is done at vhost lib, outside the > > > > > > > > > > guest, how could the guest take the control here? > > > > > > > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > > > > > > > vhost should do what guest told it to. > > > > > > > > > > > > > > > > > > See virtio spec: > > > > > > > > > 5.1.6.5.5 Automatic receive steering in multiqueue mode > > > > > > > > > > > > > > > > Spec says: > > > > > > > > > > > > > > > > After the driver transmitted a packet of a flow on transmitqX, > > > > > > > > the device SHOULD cause incoming packets for that flow to be > > > > > > > > steered to receiveqX. > > > > > > > > > > > > > > > > > > > > > > > > Michael, I still have no idea how vhost could know the flow even > > > > > > > > after discussion with Huawei. Could you be more specific about > > > > > > > > this? Say, how could guest know that? And how could guest tell > > > > > > > > vhost which RX is gonna to use? 
> > > > > > > > > > > > > > > > Thanks. > > > > > > > > > > > > > > > > --yliu > > > > > > > > > > > > > > I don't really understand the question. > > > > > > > > > > > > > > When guests transmits a packet, it makes a decision > > > > > > > about the flow to use, and maps that to a tx/rx pair of queues. > > > > > > > > > > > > > > It sends packets out on the tx queue and expects device to > > > > > > > return packets from the same flow on the rx queue. > > > > > > > > > > > > Why? I can understand that there should be a mapping between > > > > > > flows and queues in a way that there is no re-ordering, but > > > > > > I can't see the relation of receiving a flow with a TX queue. > > > > > > > > > > > > fbl > > > > > > > > > > That's the way virtio chose to program the rx steering logic. > > > > > > > > > > It's low overhead (no special commands), and > > > > > works well for TCP when user is an endpoint since rx and tx > > > > > for tcp are generally tied (because of ack handling). > > > > It is low overhead for the control plane, but not for the data plane. > > Well, there's zero data plane overhead within the guest. > You can't go lower :) I agree, but I am talking about vhost-user or whatever means we use to provide packets to the virtio backend. That will have to distribute the packets according to the guest's mapping which is not zero overhead. > > > > > We can discuss other ways, e.g. special commands for guests to > > > > > program steering. > > > > > We'd have to first see some data showing the current scheme > > > > > is problematic somehow. > > > > The issue is that the spec assumes the packets are coming in > > a serialized way and the distribution will be made by vhost-user > > but that isn't necessarily true. > > > > Making the distribution guest controlled is obviously the right > thing to do if guest is the endpoint: we need guest scheduler to > make the decisions, it's the only entity that knows > how are tasks distributed across VCPUs. 
Again, I agree. My point is that it can also allow no mapping or full
freedom. I don't see that as an option now.

> It's possible that this is not the right thing for when guest
> is just doing bridging between two VNICs:
> are you saying packets should just go from RX queue N
> on eth0 to TX queue N on eth1, making host make all
> the queue selection decisions?

The idea is that the guest could TX on queue N and the host would push
packets from the same stream on RX queue Y. So, the guest is free to send
packets on any queue and the host is free to send packets on any queue as
long as both keep a stable mapping to avoid re-ordering.

What if the guest is not trusted and the host has the requirement to send
priority packets to queue#0? That is not possible if the backend is
forced to follow the guest mapping.

> This sounds reasonable. Since there's a mix of local and
> bridged traffic normally, does this mean we need
> a per-packet flag that tells host to
> ignore the packet for classification purposes?

Real NICs will apply a hash to each incoming packet and send it out to a
specific queue, and then a CPU is selected from there. So, the NIC driver
or the OS doesn't change that. The same rationale works for virtio-net.
Of course, we can use ntuple to force specific streams to go to specific
queues, but that isn't the default policy.

> > > > The issue is that the restriction imposes operations to be done in the
> > > > data path. For instance, Open vSwitch has N number of threads to manage
> > > > X RX queues. We distribute them in round-robin fashion. So, the thread
> > > > polling one RX queue will do all the packet processing and push it to the
> > > > TX queue of the other device (vhost-user or not) using the same 'id'.
> > > >
> > > > Doing so we can avoid locking between threads and TX queues and any other
> > > > extra computation while still keeping the packet ordering/distribution fine.
> > > > > > > > However, if vhost-user has to send packets according with guest mapping, > > > > it will require locking between queues and additional operations to select > > > > the appropriate queue. Those actions will cause performance issues. > > > > > > You only need to send updates if guest moves a flow to another queue. > > > This is very rare since guest must avoid reordering. > > > > OK, maybe I missed something. Could you point me to the spec talking > > about the update? > > > > It doesn't talk about that really - it's an implementation > detail. What I am saying is that you can have e.g. > a per queue data structure with flows using it. > If you find the flow there, then you know nothing changed > and there is no need to update other queues. > > > > > > Oh and you don't have to have locking. Just update the table and make > > > the target pick up the new value at leasure, worst case a packet ends up > > > in the wrong queue. > > > > You do because packets are coming on different vswitch queues and they > > could get mapped to the same virtio queue enforced by the guest, so some > > sort of synchronization is needed. > > Right. So to optimize that, you really need a 1:1 mapping, but this > optimization only makes sense if guest is not in the end processing > these packets in the application on the same CPU - otherwise you > are just causing IPIs. Guest should move the apps to the CPU processing the queues. That's what Linux does by default and that's why I am saying the requirement from spec should be about maintaining stable mapping. > With the per-packet flag to bypass the classifier as suggested above, > you would do a lookup, find flow is not classified and just forward > it 1:1 as you wanted to. That is heavy, we can't afford per packet inspection. > > That is one thing. Another is that it will need some mapping between the > > hash available in the vswitch (not necessary L2~L4) with the hash/queue > > mapping provided by the guest. 
That doesn't require locking, but it's a
> > costly operation. Alternatively, the vswitch could calculate the full
> > L2-L4 hash, which is also a costly operation.
> >
> > Packets ending up in the wrong queue aren't that bad, but then we need to
> > enforce processing order because re-ordering is really bad.
> >
>
> Right. So if you consider a mix of packets with guest as endpoint
> and guest as a bridge, then there's apparently no way out -
> you need to identify the flow somehow in order to know
> which is which.
>
> I guess one solution is to give up and make it a global
> decision.

My proposal is to:
1) keep the flow-to-queue mapping stable by default;
2) respect the guest's request to map certain flows to specific queues.

> But OTOH I think igb supports calculating the RX hash in hardware:
> it sets NETIF_F_RXHASH on Linux.
> If so, can't that be used for the initial lookup?

Yes, it does. But I can't guarantee all vswitch ports or packets will
have a valid rxhash. Even if we decide to use that, we still need to
move each packet coming from different vswitch queues to specific
virtio queues (packets crossing queues).

> > > > I see no real benefit from enforcing the guest mapping outside to
> > > > justify all the computation cost, so my suggestion is to change the
> > > > spec to suggest that behavior, but not to require that to be compliant.
> > > >
> > > > Does that make sense?
> > > >
> > > > Thanks,
> > > > fbl
> > >
> > > It's not a question of what the spec says, it's a question of the
> > > quality of implementation: guest needs to be able to balance load
> > > between CPUs serving the queues, this means it needs a way to control
> > > steering.
> >
> > Indeed, a mapping could be provided by the guest to steer certain flows
> > to specific queues and of course the implementation must follow that.
> > However, it seems that guest could simply leave that mapping open too.
>
> Right, we can add such an option in the spec.
:-)

> > > IMO having dpdk control it makes no sense in the scenario.
> >
> > Why not? The only requirement should be that the implementation
> > should avoid re-ordering by keeping the mapping stable between
> > streams and queues.
>
> Well this depends on whether there's an application within
> the guest that consumes the flow and does something with
> the data. If yes, then we need to be careful not to
> compete with that application for CPU, otherwise
> it won't be able to produce data.

When you have multiple queues, ideally irqbalance will spread their
interrupts across the CPUs. So, when a specific queue receives a packet,
it will generate an interrupt, which runs a softirq that puts the data
into the app's socket and schedules the app to run next. So, in summary,
the app by default will run on the CPU processing its traffic.

> I guess that's not the case for pktgen or forwarding,
> in these cases networking is all you care about.

Those use-cases will work regardless.

> > > This is different from dpdk sending packets to real NIC
> > > queues which all operate in parallel.
> >
> > The goal of multiqueue support is to have them working in parallel.
> >
> > fbl
>
> What I meant is "in parallel with the application doing the
> actual logic and producing the packets".

fbl
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 7026bfa..14e00ef 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -42,6 +42,16 @@
 
 #define MAX_PKT_BURST 32
 
+static inline int __attribute__((always_inline))
+is_valid_virt_queue_idx(uint32_t virtq_idx, int is_tx, uint32_t max_qp_idx)
+{
+	if ((is_tx ^ (virtq_idx & 0x1)) ||
+	    (virtq_idx >= max_qp_idx * VIRTIO_QNUM))
+		return 0;
+
+	return 1;
+}
+
 /**
  * This function adds buffers to the virtio devices RX virtqueue. Buffers can
  * be received from the physical port or from another virtio device. A packet
@@ -68,12 +78,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	uint8_t success = 0;
 
 	LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_rx()\n", dev->device_fh);
-	if (unlikely(queue_id != VIRTIO_RXQ)) {
-		LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) {
+		RTE_LOG(ERR, VHOST_DATA,
+			"%s (%"PRIu64"): virtqueue idx:%d invalid.\n",
+			__func__, dev->device_fh, queue_id);
 		return 0;
 	}
 
-	vq = dev->virtqueue[VIRTIO_RXQ];
+	vq = dev->virtqueue[queue_id];
 	count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count;
 
 	/*
@@ -235,8 +247,9 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 }
 
 static inline uint32_t __attribute__((always_inline))
-copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx,
-	uint16_t res_end_idx, struct rte_mbuf *pkt)
+copy_from_mbuf_to_vring(struct virtio_net *dev, uint32_t queue_id,
+	uint16_t res_base_idx, uint16_t res_end_idx,
+	struct rte_mbuf *pkt)
 {
 	uint32_t vec_idx = 0;
 	uint32_t entry_success = 0;
@@ -264,7 +277,7 @@ copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx,
 	 * Convert from gpa to vva
 	 * (guest physical addr -> vhost virtual addr)
 	 */
-	vq = dev->virtqueue[VIRTIO_RXQ];
+	vq = dev->virtqueue[queue_id];
 	vb_addr = gpa_to_vva(dev, vq->buf_vec[vec_idx].buf_addr);
 	vb_hdr_addr = vb_addr;
 
@@ -464,11 +477,14 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_merge_rx()\n",
 		dev->device_fh);
 
-	if (unlikely(queue_id != VIRTIO_RXQ)) {
-		LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->virt_qp_nb))) {
+		RTE_LOG(ERR, VHOST_DATA,
+			"%s (%"PRIu64"): virtqueue idx:%d invalid.\n",
+			__func__, dev->device_fh, queue_id);
+		return 0;
 	}
 
-	vq = dev->virtqueue[VIRTIO_RXQ];
+	vq = dev->virtqueue[queue_id];
 	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
 
 	if (count == 0)
@@ -509,8 +525,8 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 			res_cur_idx);
 	} while (success == 0);
 
-	entry_success = copy_from_mbuf_to_vring(dev, res_base_idx,
-		res_cur_idx, pkts[pkt_idx]);
+	entry_success = copy_from_mbuf_to_vring(dev, queue_id,
+		res_base_idx, res_cur_idx, pkts[pkt_idx]);
 
 	rte_compiler_barrier();
 
@@ -562,12 +578,14 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
 	uint16_t free_entries, entry_success = 0;
 	uint16_t avail_idx;
 
-	if (unlikely(queue_id != VIRTIO_TXQ)) {
-		LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->virt_qp_nb))) {
+		RTE_LOG(ERR, VHOST_DATA,
+			"%s (%"PRIu64"): virtqueue idx:%d invalid.\n",
+			__func__, dev->device_fh, queue_id);
 		return 0;
 	}
-	vq = dev->virtqueue[VIRTIO_TXQ];
+	vq = dev->virtqueue[queue_id];
 
 	avail_idx = *((volatile uint16_t *)&vq->avail->idx);
 
 	/* If there are no available buffers then return. */