From patchwork Thu Nov 2 16:42:47 2017
X-Patchwork-Submitter: Matan Azrad
X-Patchwork-Id: 31124
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Matan Azrad
To: Adrien Mazarguil
Cc: dev@dpdk.org, Ophir Munk
Date: Thu, 2 Nov 2017 16:42:47 +0000
Message-Id: <1509640971-8637-5-git-send-email-matan@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1509640971-8637-1-git-send-email-matan@mellanox.com>
References: <1509358049-18854-1-git-send-email-matan@mellanox.com>
 <1509640971-8637-1-git-send-email-matan@mellanox.com>
Subject: [dpdk-dev] [PATCH v5 4/8] net/mlx4: merge Tx path functions

Merge the tx_burst and mlx4_post_send functions to avoid querying the
remaining send queue (WQ) space twice for each packet.

Signed-off-by: Matan Azrad
Acked-by: Adrien Mazarguil
---
 drivers/net/mlx4/mlx4_rxtx.c | 355 +++++++++++++++++++++----------------------
 1 file changed, 170 insertions(+), 185 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index 3169fe5..e0afbea 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -238,185 +238,6 @@ struct pv {
 }
 
 /**
- * Posts a single work request to a send queue.
- *
- * @param txq
- *   Target Tx queue.
- * @param pkt
- *   Packet to transmit.
- *
- * @return
- *   0 on success, negative errno value otherwise.
- */
-static inline int
-mlx4_post_send(struct txq *txq, struct rte_mbuf *pkt)
-{
-        struct mlx4_wqe_ctrl_seg *ctrl;
-        struct mlx4_wqe_data_seg *dseg;
-        struct mlx4_sq *sq = &txq->msq;
-        struct rte_mbuf *buf;
-        union {
-                uint32_t flags;
-                uint16_t flags16[2];
-        } srcrb;
-        uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
-        uint32_t lkey;
-        uintptr_t addr;
-        uint32_t owner_opcode = MLX4_OPCODE_SEND;
-        uint32_t byte_count;
-        int wqe_real_size;
-        int nr_txbbs;
-        struct pv *pv = (struct pv *)txq->bounce_buf;
-        int pv_counter = 0;
-
-        /* Calculate the needed work queue entry size for this packet. */
-        wqe_real_size = sizeof(struct mlx4_wqe_ctrl_seg) +
-                        pkt->nb_segs * sizeof(struct mlx4_wqe_data_seg);
-        nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
-        /*
-         * Check that there is room for this WQE in the send queue and that
-         * the WQE size is legal.
-         */
-        if (((sq->head - sq->tail) + nr_txbbs +
-             sq->headroom_txbbs) >= sq->txbb_cnt ||
-            nr_txbbs > MLX4_MAX_WQE_TXBBS) {
-                return -ENOSPC;
-        }
-        /* Get the control and data entries of the WQE. */
-        ctrl = (struct mlx4_wqe_ctrl_seg *)mlx4_get_send_wqe(sq, head_idx);
-        dseg = (struct mlx4_wqe_data_seg *)((uintptr_t)ctrl +
-                                            sizeof(struct mlx4_wqe_ctrl_seg));
-        /* Fill the data segments with buffer information. */
-        for (buf = pkt; buf != NULL; buf = buf->next, dseg++) {
-                addr = rte_pktmbuf_mtod(buf, uintptr_t);
-                rte_prefetch0((volatile void *)addr);
-                /* Handle WQE wraparound. */
-                if (dseg >= (struct mlx4_wqe_data_seg *)sq->eob)
-                        dseg = (struct mlx4_wqe_data_seg *)sq->buf;
-                dseg->addr = rte_cpu_to_be_64(addr);
-                /* Memory region key for this memory pool. */
-                lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(buf));
-#ifndef NDEBUG
-                if (unlikely(lkey == (uint32_t)-1)) {
-                        /* MR does not exist. */
-                        DEBUG("%p: unable to get MP <-> MR association",
-                              (void *)txq);
-                        /*
-                         * Restamp entry in case of failure.
-                         * Make sure that size is written correctly
-                         * Note that we give ownership to the SW, not the HW.
-                         */
-                        ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
-                        mlx4_txq_stamp_freed_wqe(sq, head_idx,
-                                     (sq->head & sq->txbb_cnt) ? 0 : 1);
-                        return -EFAULT;
-                }
-#endif /* NDEBUG */
-                dseg->lkey = rte_cpu_to_be_32(lkey);
-                if (likely(buf->data_len)) {
-                        byte_count = rte_cpu_to_be_32(buf->data_len);
-                } else {
-                        /*
-                         * Zero length segment is treated as inline segment
-                         * with zero data.
-                         */
-                        byte_count = RTE_BE32(0x80000000);
-                }
-                /*
-                 * If the data segment is not at the beginning of a
-                 * Tx basic block (TXBB) then write the byte count,
-                 * else postpone the writing to just before updating the
-                 * control segment.
-                 */
-                if ((uintptr_t)dseg & (uintptr_t)(MLX4_TXBB_SIZE - 1)) {
-                        /*
-                         * Need a barrier here before writing the byte_count
-                         * fields to make sure that all the data is visible
-                         * before the byte_count field is set.
-                         * Otherwise, if the segment begins a new cacheline,
-                         * the HCA prefetcher could grab the 64-byte chunk and
-                         * get a valid (!= 0xffffffff) byte count but stale
-                         * data, and end up sending the wrong data.
-                         */
-                        rte_io_wmb();
-                        dseg->byte_count = byte_count;
-                } else {
-                        /*
-                         * This data segment starts at the beginning of a new
-                         * TXBB, so we need to postpone its byte_count writing
-                         * for later.
-                         */
-                        pv[pv_counter].dseg = dseg;
-                        pv[pv_counter++].val = byte_count;
-                }
-        }
-        /* Write the first DWORD of each TXBB save earlier. */
-        if (pv_counter) {
-                /* Need a barrier here before writing the byte_count. */
-                rte_io_wmb();
-                for (--pv_counter; pv_counter >= 0; pv_counter--)
-                        pv[pv_counter].dseg->byte_count = pv[pv_counter].val;
-        }
-        /* Fill the control parameters for this packet. */
-        ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
-        /*
-         * For raw Ethernet, the SOLICIT flag is used to indicate that no ICRC
-         * should be calculated.
-         */
-        txq->elts_comp_cd -= nr_txbbs;
-        if (unlikely(txq->elts_comp_cd <= 0)) {
-                txq->elts_comp_cd = txq->elts_comp_cd_init;
-                srcrb.flags = RTE_BE32(MLX4_WQE_CTRL_SOLICIT |
-                                       MLX4_WQE_CTRL_CQ_UPDATE);
-        } else {
-                srcrb.flags = RTE_BE32(MLX4_WQE_CTRL_SOLICIT);
-        }
-        /* Enable HW checksum offload if requested */
-        if (txq->csum &&
-            (pkt->ol_flags &
-             (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
-                const uint64_t is_tunneled = (pkt->ol_flags &
-                                              (PKT_TX_TUNNEL_GRE |
-                                               PKT_TX_TUNNEL_VXLAN));
-
-                if (is_tunneled && txq->csum_l2tun) {
-                        owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
-                                        MLX4_WQE_CTRL_IL4_HDR_CSUM;
-                        if (pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM)
-                                srcrb.flags |=
-                                        RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
-                } else {
-                        srcrb.flags |= RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM |
-                                                MLX4_WQE_CTRL_TCP_UDP_CSUM);
-                }
-        }
-        if (txq->lb) {
-                /*
-                 * Copy destination MAC address to the WQE, this allows
-                 * loopback in eSwitch, so that VFs and PF can communicate
-                 * with each other.
-                 */
-                srcrb.flags16[0] = *(rte_pktmbuf_mtod(pkt, uint16_t *));
-                ctrl->imm = *(rte_pktmbuf_mtod_offset(pkt, uint32_t *,
-                                                      sizeof(uint16_t)));
-        } else {
-                ctrl->imm = 0;
-        }
-        ctrl->srcrb_flags = srcrb.flags;
-        /*
-         * Make sure descriptor is fully written before
-         * setting ownership bit (because HW can start
-         * executing as soon as we do).
-         */
-        rte_wmb();
-        ctrl->owner_opcode = rte_cpu_to_be_32(owner_opcode |
-                                              ((sq->head & sq->txbb_cnt) ?
-                                               MLX4_BIT_WQE_OWN : 0));
-        sq->head += nr_txbbs;
-        return 0;
-}
-
-/**
  * DPDK callback for Tx.
  *
  * @param dpdk_txq
@@ -439,7 +260,8 @@ struct pv {
         unsigned int bytes_sent = 0;
         unsigned int i;
         unsigned int max;
-        int err;
+        struct mlx4_sq *sq = &txq->msq;
+        struct pv *pv = (struct pv *)txq->bounce_buf;
 
         assert(txq->elts_comp_cd != 0);
         mlx4_txq_complete(txq);
@@ -460,6 +282,21 @@ struct pv {
                         (((elts_head + 1) == elts_n) ? 0 : elts_head + 1);
                 struct txq_elt *elt_next = &(*txq->elts)[elts_head_next];
                 struct txq_elt *elt = &(*txq->elts)[elts_head];
+                uint32_t owner_opcode = MLX4_OPCODE_SEND;
+                struct mlx4_wqe_ctrl_seg *ctrl;
+                struct mlx4_wqe_data_seg *dseg;
+                struct rte_mbuf *sbuf;
+                union {
+                        uint32_t flags;
+                        uint16_t flags16[2];
+                } srcrb;
+                uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
+                uint32_t lkey;
+                uintptr_t addr;
+                uint32_t byte_count;
+                int wqe_real_size;
+                int nr_txbbs;
+                int pv_counter = 0;
 
                 /* Clean up old buffer. */
                 if (likely(elt->buf != NULL)) {
@@ -478,18 +315,166 @@ struct pv {
                         } while (tmp != NULL);
                 }
                 RTE_MBUF_PREFETCH_TO_FREE(elt_next->buf);
-                /* Post the packet for sending. */
-                err = mlx4_post_send(txq, buf);
-                if (unlikely(err)) {
+                /*
+                 * Calculate the needed work queue entry size
+                 * for this packet.
+                 */
+                wqe_real_size = sizeof(struct mlx4_wqe_ctrl_seg) +
+                                buf->nb_segs * sizeof(struct mlx4_wqe_data_seg);
+                nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
+                /*
+                 * Check that there is room for this WQE in the send
+                 * queue and that the WQE size is legal.
+                 */
+                if (((sq->head - sq->tail) + nr_txbbs +
+                     sq->headroom_txbbs) >= sq->txbb_cnt ||
+                    nr_txbbs > MLX4_MAX_WQE_TXBBS) {
                         elt->buf = NULL;
-                        goto stop;
+                        break;
                 }
+                /* Get the control and data entries of the WQE. */
+                ctrl = (struct mlx4_wqe_ctrl_seg *)
+                                mlx4_get_send_wqe(sq, head_idx);
+                dseg = (struct mlx4_wqe_data_seg *)((uintptr_t)ctrl +
+                                sizeof(struct mlx4_wqe_ctrl_seg));
+                /* Fill the data segments with buffer information. */
+                for (sbuf = buf; sbuf != NULL; sbuf = sbuf->next, dseg++) {
+                        addr = rte_pktmbuf_mtod(sbuf, uintptr_t);
+                        rte_prefetch0((volatile void *)addr);
+                        /* Handle WQE wraparound. */
+                        if (dseg >= (struct mlx4_wqe_data_seg *)sq->eob)
+                                dseg = (struct mlx4_wqe_data_seg *)sq->buf;
+                        dseg->addr = rte_cpu_to_be_64(addr);
+                        /* Memory region key (big endian). */
+                        lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
+                        dseg->lkey = rte_cpu_to_be_32(lkey);
+#ifndef NDEBUG
+                        if (unlikely(dseg->lkey ==
+                                rte_cpu_to_be_32((uint32_t)-1))) {
+                                /* MR does not exist. */
+                                DEBUG("%p: unable to get MP <-> MR association",
+                                      (void *)txq);
+                                /*
+                                 * Restamp entry in case of failure.
+                                 * Make sure that size is written correctly
+                                 * Note that we give ownership to the SW,
+                                 * not the HW.
+                                 */
+                                ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
+                                mlx4_txq_stamp_freed_wqe(sq, head_idx,
+                                        (sq->head & sq->txbb_cnt) ? 0 : 1);
+                                elt->buf = NULL;
+                                break;
+                        }
+#endif /* NDEBUG */
+                        if (likely(sbuf->data_len)) {
+                                byte_count = rte_cpu_to_be_32(sbuf->data_len);
+                        } else {
+                                /*
+                                 * Zero length segment is treated as inline
+                                 * segment with zero data.
+                                 */
+                                byte_count = RTE_BE32(0x80000000);
+                        }
+                        /*
+                         * If the data segment is not at the beginning
+                         * of a Tx basic block (TXBB) then write the
+                         * byte count, else postpone the writing to
+                         * just before updating the control segment.
+                         */
+                        if ((uintptr_t)dseg & (uintptr_t)(MLX4_TXBB_SIZE - 1)) {
+                                /*
+                                 * Need a barrier here before writing the
+                                 * byte_count fields to make sure that all the
+                                 * data is visible before the byte_count field
+                                 * is set. otherwise, if the segment begins a
+                                 * new cacheline, the HCA prefetcher could grab
+                                 * the 64-byte chunk and get a valid
+                                 * (!= 0xffffffff) byte count but stale data,
+                                 * and end up sending the wrong data.
+                                 */
+                                rte_io_wmb();
+                                dseg->byte_count = byte_count;
+                        } else {
+                                /*
+                                 * This data segment starts at the beginning of
+                                 * a new TXBB, so we need to postpone its
+                                 * byte_count writing for later.
+                                 */
+                                pv[pv_counter].dseg = dseg;
+                                pv[pv_counter++].val = byte_count;
+                        }
+                }
+                /* Write the first DWORD of each TXBB save earlier. */
+                if (pv_counter) {
+                        /* Need a barrier before writing the byte_count. */
+                        rte_io_wmb();
+                        for (--pv_counter; pv_counter >= 0; pv_counter--)
+                                pv[pv_counter].dseg->byte_count =
+                                                pv[pv_counter].val;
+                }
+                /* Fill the control parameters for this packet. */
+                ctrl->fence_size = (wqe_real_size >> 4) & 0x3f;
+                /*
+                 * For raw Ethernet, the SOLICIT flag is used to indicate
+                 * that no ICRC should be calculated.
+                 */
+                txq->elts_comp_cd -= nr_txbbs;
+                if (unlikely(txq->elts_comp_cd <= 0)) {
+                        txq->elts_comp_cd = txq->elts_comp_cd_init;
+                        srcrb.flags = RTE_BE32(MLX4_WQE_CTRL_SOLICIT |
+                                               MLX4_WQE_CTRL_CQ_UPDATE);
+                } else {
+                        srcrb.flags = RTE_BE32(MLX4_WQE_CTRL_SOLICIT);
+                }
+                /* Enable HW checksum offload if requested */
+                if (txq->csum &&
+                    (buf->ol_flags &
+                     (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
+                        const uint64_t is_tunneled = (buf->ol_flags &
+                                                      (PKT_TX_TUNNEL_GRE |
+                                                       PKT_TX_TUNNEL_VXLAN));
+
+                        if (is_tunneled && txq->csum_l2tun) {
+                                owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
+                                                MLX4_WQE_CTRL_IL4_HDR_CSUM;
+                                if (buf->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+                                        srcrb.flags |=
+                                            RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
+                        } else {
+                                srcrb.flags |=
+                                        RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM |
+                                                 MLX4_WQE_CTRL_TCP_UDP_CSUM);
+                        }
+                }
+                if (txq->lb) {
+                        /*
+                         * Copy destination MAC address to the WQE, this allows
+                         * loopback in eSwitch, so that VFs and PF can
+                         * communicate with each other.
+                         */
+                        srcrb.flags16[0] = *(rte_pktmbuf_mtod(buf, uint16_t *));
+                        ctrl->imm = *(rte_pktmbuf_mtod_offset(buf, uint32_t *,
+                                                      sizeof(uint16_t)));
+                } else {
+                        ctrl->imm = 0;
+                }
+                ctrl->srcrb_flags = srcrb.flags;
+                /*
+                 * Make sure descriptor is fully written before
+                 * setting ownership bit (because HW can start
+                 * executing as soon as we do).
+                 */
+                rte_wmb();
+                ctrl->owner_opcode = rte_cpu_to_be_32(owner_opcode |
+                                              ((sq->head & sq->txbb_cnt) ?
+                                               MLX4_BIT_WQE_OWN : 0));
+                sq->head += nr_txbbs;
                 elt->buf = buf;
                 bytes_sent += buf->pkt_len;
                 ++elts_comp;
                 elts_head = elts_head_next;
         }
-stop:
         /* Take a shortcut if nothing must be sent. */
         if (unlikely(i == 0))
                 return 0;
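
For context, here is a minimal stand-alone sketch of the send-queue admission
check that the merged Tx path above now performs exactly once per packet
(previously tx_burst checked for a free element and mlx4_post_send separately
re-checked the WQ space and returned -ENOSPC). The structure fields, macro
names and the MAX_WQE_TXBBS value below are simplified assumptions for
illustration only; they are not the definitions from the mlx4 PRM headers.

#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified send-queue state, mirroring the sq->head/tail/txbb fields used
 * in the diff above (illustrative, not the real struct mlx4_sq).
 */
struct sq_state {
        uint32_t head;           /* index of the next TXBB to post */
        uint32_t tail;           /* index of the last completed TXBB */
        uint32_t txbb_cnt;       /* total Tx basic blocks in the ring */
        uint32_t headroom_txbbs; /* TXBBs always kept free as headroom */
};

#define TXBB_SIZE 64                /* Tx basic block size in bytes */
#define SIZE_TO_TXBBS(size) (((size) + TXBB_SIZE - 1) / TXBB_SIZE)
#define MAX_WQE_TXBBS 16            /* illustrative cap, not the PRM value */

/* Return true when a WQE of wqe_size bytes fits in the send queue. */
static bool
sq_has_room(const struct sq_state *sq, unsigned int wqe_size)
{
        unsigned int nr_txbbs = SIZE_TO_TXBBS(wqe_size);

        if (nr_txbbs > MAX_WQE_TXBBS)
                return false;
        return (sq->head - sq->tail) + nr_txbbs + sq->headroom_txbbs <
               sq->txbb_cnt;
}

In the merged path a failed check simply breaks out of the burst loop, so the
-ENOSPC return value and its handling by the caller are no longer needed.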