[dpdk-dev,5/8] driver/virtio:enqueue vhost TX offload
Commit Message
Enqueue vhost TX checksum and TSO4/6 offload in virtio-net lib.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
drivers/net/virtio/virtio_rxtx.c | 61 ++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+), 0 deletions(-)
Comments
On Wed, Oct 21, 2015 at 6:46 AM, Jijiang Liu <jijiang.liu@intel.com> wrote:
> @@ -221,6 +277,11 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct
> rte_mbuf *cookie)
> dxp->cookie = (void *)cookie;
> dxp->ndescs = needed;
>
> + if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
> + if (virtqueue_enqueue_offload(txvq, cookie, idx,
> head_size) < 0)
> + return -EPERM;
> + }
> +
> start_dp = txvq->vq_ring.desc;
> start_dp[idx].addr =
> txvq->virtio_net_hdr_mem + idx * head_size;
>
If the driver correctly reports negotiated offload capabilities (see my
previous comment on patch 3), there is no need for the test on
VIRTIO_NET_F_CSUM, because the application is not supposed to ask for
offloads on a driver that does not support them.
Same logic would apply to virtqueue_enqueue_offload() function.
In the end, we could always call this function (or move the code here).
Hi David,
From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Thursday, October 29, 2015 10:16 PM
To: Liu, Jijiang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 5/8] driver/virtio:enqueue vhost TX offload
On Wed, Oct 21, 2015 at 6:46 AM, Jijiang Liu <jijiang.liu@intel.com> wrote:
@@ -221,6 +277,11 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
dxp->cookie = (void *)cookie;
dxp->ndescs = needed;
+ if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
+ if (virtqueue_enqueue_offload(txvq, cookie, idx, head_size) < 0)
+ return -EPERM;
+ }
+
start_dp = txvq->vq_ring.desc;
start_dp[idx].addr =
txvq->virtio_net_hdr_mem + idx * head_size;
If the driver correctly reports negotiated offload capabilities (see my previous comment on patch 3), there is no need for the test on VIRTIO_NET_F_CSUM, because the application is not supposed to ask for offloads on a driver that does not support them.
> If the driver correctly reports negotiated offload capabilities, then the application in the guest will set ol_flags in the mbuf based on those offload capabilities.
If VIRTIO_NET_F_CSUM is not negotiated, there is no need to call virtqueue_enqueue_offload() to check ol_flags in the mbuf to see whether TX checksum or TSO is requested, which keeps the impact on the checksum-disabled path as small as possible.
So I think the check is needed.
> I agree with your comments on patch 3; I will add TX offload capabilities to dev_info to tell the application that the driver supports these offloads.
Same logic would apply to virtqueue_enqueue_offload() function.
In the end, we could always call this function (or move the code here).
--
David Marchand
On Fri, Oct 30, 2015 at 12:45 PM, Liu, Jijiang <jijiang.liu@intel.com>
wrote:
>
> If the driver correctly reports negotiated offload capabilities (see my
> previous comment on patch 3), there is no need for the test on
> VIRTIO_NET_F_CSUM, because the application is not supposed to ask for
> offloads on a driver that does not support them.
>
>
> > If the driver correctly reports negotiated offload capabilities, then
> the application in the guest will set ol_flags in the mbuf based on those
> offload capabilities.
>
> If VIRTIO_NET_F_CSUM is not negotiated, there is no need to call
> virtqueue_enqueue_offload() to check ol_flags in the mbuf to see whether
> TX checksum or TSO is requested, which keeps the impact on the
> checksum-disabled path as small as possible.
>
> So I think the check is needed.
>
You are supposed to handle mbufs with offloads only if VIRTIO_NET_F_CSUM was
enabled in the first place through the capabilities.
So looking at ol_flags means that you implicitly check for
VIRTIO_NET_F_CSUM.
This is just an optimisation, so do as you like.
Anyway, I just want to confirm: is this patchset for 2.2?
From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Friday, October 30, 2015 8:15 PM
To: Liu, Jijiang
Cc: dev@dpdk.org; Thomas Monjalon
Subject: Re: [dpdk-dev] [PATCH 5/8] driver/virtio:enqueue vhost TX offload
On Fri, Oct 30, 2015 at 12:45 PM, Liu, Jijiang <jijiang.liu@intel.com> wrote:
If the driver correctly reports negotiated offload capabilities (see my previous comment on patch 3), there is no need for the test on VIRTIO_NET_F_CSUM, because the application is not supposed to ask for offloads on a driver that does not support them.
> If the driver correctly reports negotiated offload capabilities, then the application in the guest will set ol_flags in the mbuf based on those offload capabilities.
If VIRTIO_NET_F_CSUM is not negotiated, there is no need to call virtqueue_enqueue_offload() to check ol_flags in the mbuf to see whether TX checksum or TSO is requested, which keeps the impact on the checksum-disabled path as small as possible.
So I think the check is needed.
You are supposed to handle mbufs with offloads only if VIRTIO_NET_F_CSUM was enabled in the first place through the capabilities.
So looking at ol_flags means that you implicitly check for VIRTIO_NET_F_CSUM.
This is just an optimisation, so do as you like.
Anyway, I just want to confirm: is this patchset for 2.2?
> Yes, I will send a new version of this patch set ASAP.
--
David Marchand
@@ -50,6 +50,10 @@
#include <rte_string_fns.h>
#include <rte_errno.h>
#include <rte_byteorder.h>
+#include <rte_tcp.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_sctp.h>
#include "virtio_logs.h"
#include "virtio_ethdev.h"
@@ -199,6 +203,58 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
}
static int
+virtqueue_enqueue_offload(struct virtqueue *txvq, struct rte_mbuf *m,
+ uint16_t idx, uint16_t hdr_sz)
+{
+ struct virtio_net_hdr *hdr = (struct virtio_net_hdr *)(uintptr_t)
+ (txvq->virtio_net_hdr_addr + idx * hdr_sz);
+
+ hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+
+ /* if vhost TX checksum offload is required */
+ if (m->ol_flags & PKT_TX_IP_CKSUM) {
+ hdr->csum_start = m->l2_len;
+ hdr->csum_offset = offsetof(struct ipv4_hdr, hdr_checksum);
+ } else if (m->ol_flags & PKT_TX_L4_MASK) {
+ hdr->csum_start = m->l2_len + m->l3_len;
+ switch (m->ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_TCP_CKSUM:
+ hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
+ break;
+ case PKT_TX_UDP_CKSUM:
+ hdr->csum_offset = offsetof(struct udp_hdr,
+ dgram_cksum);
+ break;
+ case PKT_TX_SCTP_CKSUM:
+ hdr->csum_offset = offsetof(struct sctp_hdr, cksum);
+ break;
+ default:
+ break;
+ }
+ } else
+ hdr->flags = 0;
+
+ /* if vhost TSO offload is required */
+ if (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG) {
+ if (m->ol_flags & PKT_TX_IPV4) {
+ if (!vtpci_with_feature(txvq->hw,
+ VIRTIO_NET_F_HOST_TSO4))
+ return -1;
+ hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
+ } else if (m->ol_flags & PKT_TX_IPV6) {
+ if (!vtpci_with_feature(txvq->hw,
+ VIRTIO_NET_F_HOST_TSO6))
+ return -1;
+ hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
+ }
+ hdr->gso_size = m->tso_segsz;
+ hdr->hdr_len = m->l2_len + m->l3_len + m->l4_len;
+ } else
+ hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
+ return 0;
+}
+
+static int
virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
{
struct vq_desc_extra *dxp;
@@ -221,6 +277,11 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
dxp->cookie = (void *)cookie;
dxp->ndescs = needed;
+ if (vtpci_with_feature(txvq->hw, VIRTIO_NET_F_CSUM)) {
+ if (virtqueue_enqueue_offload(txvq, cookie, idx, head_size) < 0)
+ return -EPERM;
+ }
+
start_dp = txvq->vq_ring.desc;
start_dp[idx].addr =
txvq->virtio_net_hdr_mem + idx * head_size;