[dpdk-dev,5/8] driver/virtio:enqueue vhost TX offload

Message ID 1445402801-27806-6-git-send-email-jijiang.liu@intel.com (mailing list archive)
State Superseded, archived
Headers

Commit Message

Jijiang Liu Oct. 21, 2015, 4:46 a.m. UTC
  Enqueue vhost TX checksum and TSO4/6 offload in virtio-net lib.

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 drivers/net/virtio/virtio_rxtx.c |   61 ++++++++++++++++++++++++++++++++++++++
 1 files changed, 61 insertions(+), 0 deletions(-)
  

Comments

David Marchand Oct. 29, 2015, 2:15 p.m. UTC | #1
On Wed, Oct 21, 2015 at 6:46 AM, Jijiang Liu <jijiang.liu@intel.com> wrote:

> @@ -221,6 +277,11 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct
> rte_mbuf *cookie)
>         dxp->cookie = (void *)cookie;
>         dxp->ndescs = needed;
>
> +       if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
> +               if (virtqueue_enqueue_offload(txvq, cookie, idx,
> head_size) < 0)
> +                       return -EPERM;
> +       }
> +
>         start_dp = txvq->vq_ring.desc;
>         start_dp[idx].addr =
>                 txvq->virtio_net_hdr_mem + idx * head_size;
>

If the driver correctly reports negotiated offload capabilities (see my
previous comment on patch 3), there is no need for the test on
VIRTIO_NET_F_CSUM, because the application is not supposed to ask for
offloads on a driver that does not support them.
The same logic applies to the virtqueue_enqueue_offload() function.

In the end, we could always call this function (or move the code here).
  
Jijiang Liu Oct. 30, 2015, 11:45 a.m. UTC | #2
Hi David,

On Wed, Oct 21, 2015 at 6:46 AM, Jijiang Liu <jijiang.liu@intel.com<mailto:jijiang.liu@intel.com>> wrote:
@@ -221,6 +277,11 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
        dxp->cookie = (void *)cookie;
        dxp->ndescs = needed;

+       if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
+               if (virtqueue_enqueue_offload(txvq, cookie, idx, head_size) < 0)
+                       return -EPERM;
+       }
+
        start_dp = txvq->vq_ring.desc;
        start_dp[idx].addr =
                txvq->virtio_net_hdr_mem + idx * head_size;

If the driver correctly reports negotiated offload capabilities (see my previous comment on patch 3), there is no need for the test on VIRTIO_NET_F_CSUM, because the application is not supposed to ask for offloads on a driver that does not support them.


> If the driver correctly reports negotiated offload capabilities, then the application in the guest will set the ol_flags in the mbuf based on these offload capabilities.

If VIRTIO_NET_F_CSUM is not enabled, there is no need to call virtqueue_enqueue_offload() to check the ol_flags in the mbuf for TX checksum and TSO; this keeps the impact on the TX path with checksum offload disabled as small as possible.
So I think the check is needed.

> I agree with your comments on patch 3; I will add TX offload capabilities to dev_info to tell the application that the driver supports these offloads.




The same logic applies to the virtqueue_enqueue_offload() function.

In the end, we could always call this function (or move the code here).


--
David Marchand
  
David Marchand Oct. 30, 2015, 12:14 p.m. UTC | #3
On Fri, Oct 30, 2015 at 12:45 PM, Liu, Jijiang <jijiang.liu@intel.com>
wrote:

>
> If the driver correctly reports negotiated offload capabilities (see my
> previous comment on patch 3), there is no need for the test on
> VIRTIO_NET_F_CSUM, because the application is not supposed to ask for
> offloads on a driver that does not support them.
>
> > If the driver correctly reports negotiated offload capabilities, then
> the application in the guest will set the ol_flags in the mbuf based on
> these offload capabilities.
>
> If VIRTIO_NET_F_CSUM is not enabled, there is no need to call
> virtqueue_enqueue_offload() to check the ol_flags in the mbuf for TX
> checksum and TSO; this keeps the impact on the TX path with checksum
> offload disabled as small as possible.
>
> So I think the check is needed.
>

You are supposed to handle mbufs with offloads only if VIRTIO_NET_F_CSUM was
enabled in the first place through the capabilities.
So looking at ol_flags means that you implicitly check for
VIRTIO_NET_F_CSUM.
This is just an optimisation, so do as you like.

Anyway, I just want to confirm, is this patchset for 2.2 ?
  
Jijiang Liu Oct. 30, 2015, 12:21 p.m. UTC | #4
On Fri, Oct 30, 2015 at 12:45 PM, Liu, Jijiang <jijiang.liu@intel.com<mailto:jijiang.liu@intel.com>> wrote:

If the driver correctly reports negotiated offload capabilities (see my previous comment on patch 3), there is no need for the test on VIRTIO_NET_F_CSUM, because the application is not supposed to ask for offloads on a driver that does not support them.


> If the driver correctly reports negotiated offload capabilities, then the application in the guest will set the ol_flags in the mbuf based on these offload capabilities.

If VIRTIO_NET_F_CSUM is not enabled, there is no need to call virtqueue_enqueue_offload() to check the ol_flags in the mbuf for TX checksum and TSO; this keeps the impact on the TX path with checksum offload disabled as small as possible.
So I think the check is needed.

You are supposed to handle mbufs with offloads only if VIRTIO_NET_F_CSUM was enabled in the first place through the capabilities.
So looking at ol_flags means that you implicitly check for VIRTIO_NET_F_CSUM.
This is just an optimisation, so do as you like.
Anyway, I just want to confirm, is this patchset for 2.2 ?
>Yes, I will send a new version of this patch set ASAP.


--
David Marchand
  

Patch

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index c5b53bb..b99f5b5 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -50,6 +50,10 @@ 
 #include <rte_string_fns.h>
 #include <rte_errno.h>
 #include <rte_byteorder.h>
+#include <rte_tcp.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_sctp.h>
 
 #include "virtio_logs.h"
 #include "virtio_ethdev.h"
@@ -199,6 +203,58 @@  virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
 }
 
 static int
+virtqueue_enqueue_offload(struct virtqueue *txvq, struct rte_mbuf *m,
+			uint16_t idx, uint16_t hdr_sz)
+{
+	struct virtio_net_hdr *hdr = (struct virtio_net_hdr *)(uintptr_t)
+				(txvq->virtio_net_hdr_addr + idx * hdr_sz);
+
+	hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+
+	/* if vhost TX checksum offload is required */
+	if (m->ol_flags & PKT_TX_IP_CKSUM) {
+		hdr->csum_start = m->l2_len;
+		hdr->csum_offset = offsetof(struct ipv4_hdr, hdr_checksum);
+	} else if (m->ol_flags & PKT_TX_L4_MASK) {
+		hdr->csum_start = m->l2_len + m->l3_len;
+		switch (m->ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_TCP_CKSUM:
+			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
+			break;
+		case PKT_TX_UDP_CKSUM:
+			hdr->csum_offset = offsetof(struct udp_hdr,
+							dgram_cksum);
+			break;
+		case PKT_TX_SCTP_CKSUM:
+			hdr->csum_offset = offsetof(struct sctp_hdr, cksum);
+			break;
+		default:
+			break;
+		}
+	} else
+		hdr->flags = 0;
+
+	/* if vhost TSO offload is required */
+	if (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG) {
+		if (m->ol_flags & PKT_TX_IPV4) {
+			if (!vtpci_with_feature(txvq->hw,
+				VIRTIO_NET_F_HOST_TSO4))
+				return -1;
+			hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
+		} else if (m->ol_flags & PKT_TX_IPV6) {
+			if (!vtpci_with_feature(txvq->hw,
+				VIRTIO_NET_F_HOST_TSO6))
+				return -1;
+			hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
+		}
+		hdr->gso_size = m->tso_segsz;
+		hdr->hdr_len = m->l2_len + m->l3_len + m->l4_len;
+	} else
+		hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
+	return 0;
+}
+
+static int
 virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
 {
 	struct vq_desc_extra *dxp;
@@ -221,6 +277,11 @@  virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
 	dxp->cookie = (void *)cookie;
 	dxp->ndescs = needed;
 
+	if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
+		if (virtqueue_enqueue_offload(txvq, cookie, idx, head_size) < 0)
+			return -EPERM;
+	}
+
 	start_dp = txvq->vq_ring.desc;
 	start_dp[idx].addr =
 		txvq->virtio_net_hdr_mem + idx * head_size;