From patchwork Thu Nov 2 05:47:10 2017
X-Patchwork-Submitter: "John Daley (johndale)" <johndale@cisco.com>
X-Patchwork-Id: 31094
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: John Daley <johndale@cisco.com>
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, John Daley <johndale@cisco.com>, stable@dpdk.org
Date: Wed, 1 Nov 2017 22:47:10 -0700
Message-Id: <20171102054710.25010-1-johndale@cisco.com>
In-Reply-To: <20171102015908.6953-1-johndale@cisco.com>
References: <20171102015908.6953-1-johndale@cisco.com>
Subject: [dpdk-dev] [PATCH v2] net/enic: fix TSO for packets greater than 9208 bytes

A check was previously added to drop Tx packets greater than what the
NIC is capable of sending, since such packets can freeze the send
queue. However, the check did not account for TSO packets, so TSO was
limited to 9208 bytes. Check the packet length only for non-TSO
packets. Also ensure that the TSO segment size plus the header length
does not exceed what the NIC is capable of, since this can also freeze
the send queue. Use the PKT_TX_TCP_SEG ol_flag instead of
m->tso_segsz, which is the preferred way to check for TSO.

Fixes: ed6e564c214e ("net/enic: fix memory leak with oversized Tx packets")
Cc: stable@dpdk.org

Signed-off-by: John Daley <johndale@cisco.com>
---
Note that there is some more work to do on enic TSO: the header length
is calculated by looking at the packet instead of just trusting the
mbuf TSO offload header lengths, the 'tx_oversized' stat is used for
more than just oversized packets (it gets rolled into 'oerrors', so
the miscount does not matter, but the name should be changed), and TSO
tunneling support can be added for newer hardware. Those changes will
come in the next release, but we hope this patch can be accepted in
17.11 because it fixes an existing customer problem.

v2: removed extra parentheses flagged by patchwork's checkpatch
(pre-existing, not introduced by this patch).
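For context, here is a minimal sketch of how an application would
request TSO with the 17.11-era mbuf API, so that the PMD sees the
PKT_TX_TCP_SEG flag this patch now keys off. The helper name and the
no-TCP-options assumption are illustrative, not part of the patch:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

/* Hypothetical helper: mark an IPv4/TCP mbuf for TSO. Setting
 * PKT_TX_TCP_SEG is what enic_xmit_pkts() now checks; a non-zero
 * tso_segsz alone is no longer treated as the TSO indicator.
 */
static void
request_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->l4_len = sizeof(struct tcp_hdr);	/* assumes no TCP options */
	m->tso_segsz = mss;
	m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
}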
 drivers/net/enic/enic_rxtx.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 1d43bde9a..8291865c6 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -546,12 +546,15 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64_t bus_addr;
 	uint8_t offload_mode;
 	uint16_t header_len;
+	uint64_t tso;
+	rte_atomic64_t *tx_oversized;
 
 	enic_cleanup_wq(enic, wq);
 	wq_desc_avail = vnic_wq_desc_avail(wq);
 	head_idx = wq->head_idx;
 	desc_count = wq->ring.desc_count;
 	ol_flags_mask = PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+	tx_oversized = &enic->soft_stats.tx_oversized;
 
 	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
 
@@ -561,10 +564,12 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		data_len = tx_pkt->data_len;
 		ol_flags = tx_pkt->ol_flags;
 		nb_segs = tx_pkt->nb_segs;
+		tso = ol_flags & PKT_TX_TCP_SEG;
 
-		if (pkt_len > ENIC_TX_MAX_PKT_SIZE) {
+		/* drop packet if it's too big to send */
+		if (unlikely(!tso && pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
 			rte_pktmbuf_free(tx_pkt);
-			rte_atomic64_inc(&enic->soft_stats.tx_oversized);
+			rte_atomic64_inc(tx_oversized);
 			continue;
 		}
 
@@ -587,13 +592,21 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		offload_mode = WQ_ENET_OFFLOAD_MODE_CSUM;
 		header_len = 0;
 
-		if (tx_pkt->tso_segsz) {
+		if (tso) {
 			header_len = tso_header_len(tx_pkt);
-			if (header_len) {
-				offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
-				mss = tx_pkt->tso_segsz;
+
+			/* Drop if non-TCP packet or TSO seg size is too big */
+			if (unlikely(header_len == 0 || ((tx_pkt->tso_segsz +
+				     header_len) > ENIC_TX_MAX_PKT_SIZE))) {
+				rte_pktmbuf_free(tx_pkt);
+				rte_atomic64_inc(tx_oversized);
+				continue;
 			}
+
+			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
+			mss = tx_pkt->tso_segsz;
 		}
+
 		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
 			if (ol_flags & PKT_TX_IP_CKSUM)
 				mss |= ENIC_CALC_IP_CKSUM;
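To make the new bounds concrete, the drop logic above amounts to the
following standalone check (a sketch only; 9208 is the enic
ENIC_TX_MAX_PKT_SIZE named in the subject, and the function name is
illustrative, not driver code):

#include <stdbool.h>
#include <stdint.h>

#define ENIC_TX_MAX_PKT_SIZE 9208

static bool
enic_tx_should_drop(bool tso, uint32_t pkt_len,
		    uint16_t tso_segsz, uint16_t header_len)
{
	if (!tso)
		/* Non-TSO: only the total frame length matters. */
		return pkt_len > ENIC_TX_MAX_PKT_SIZE;
	/* TSO: each segment the NIC emits carries tso_segsz payload
	 * bytes plus the replicated headers; header_len == 0 means
	 * the packet could not be parsed as TCP.
	 */
	return header_len == 0 ||
	       tso_segsz + header_len > ENIC_TX_MAX_PKT_SIZE;
}

For example, with a 9000-byte MSS and 54 bytes of Ethernet/IPv4/TCP
headers each segment is 9054 bytes and the packet is accepted, while a
9200-byte MSS (9254 bytes with headers) is dropped and counted in
tx_oversized.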