From patchwork Sun May 29 16:46:52 2016
X-Patchwork-Submitter: Jerin Jacob
X-Patchwork-Id: 13066
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Jerin Jacob
To: dev@dpdk.org
CC: Jerin Jacob, Maciej Czekaj, Kamil Rytarowski, Zyta Szpak,
	Slawomir Rosek, Radoslaw Biernacki
Date: Sun, 29 May 2016 22:16:52 +0530
Message-ID: <1464540424-12631-9-git-send-email-jerin.jacob@caviumnetworks.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1464540424-12631-1-git-send-email-jerin.jacob@caviumnetworks.com>
References: <1462634198-2289-1-git-send-email-jerin.jacob@caviumnetworks.com>
	<1464540424-12631-1-git-send-email-jerin.jacob@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v2 08/20] thunderx/nicvf: add tx_queue_setup/release support

Signed-off-by: Jerin Jacob
Signed-off-by: Maciej Czekaj
Signed-off-by: Kamil Rytarowski
Signed-off-by: Zyta Szpak
Signed-off-by: Slawomir Rosek
Signed-off-by: Radoslaw Biernacki
---
 drivers/net/thunderx/nicvf_ethdev.c | 179 ++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8fa3256..3b7cdde 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -78,6 +78,10 @@ static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    const struct rte_eth_rxconf *rx_conf,
 				    struct rte_mempool *mp);
 static void nicvf_dev_rx_queue_release(void *rx_queue);
+static int nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_txconf *tx_conf);
+static void nicvf_dev_tx_queue_release(void *sq);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -226,6 +230,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic, struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				      NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+			    socket_id, nic->node);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Round up nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				    tx_conf->tx_free_thresh :
+				    NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+	    tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			     "tx_free_thresh must be less than the number of TX "
+			     "descriptors. (tx_free_thresh=%u port=%d "
+			     "queue=%d)", (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+			   qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+				 RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+			  txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+			(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+			 NICVF_TX_FREE_MPOOL_THRESH :
+			 tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+					  nb_desc * sizeof(struct rte_mbuf *),
+					  RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+		   qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -465,6 +642,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get = nicvf_dev_info_get,
 	.rx_queue_setup = nicvf_dev_rx_queue_setup,
 	.rx_queue_release = nicvf_dev_rx_queue_release,
+	.tx_queue_setup = nicvf_dev_tx_queue_setup,
+	.tx_queue_release = nicvf_dev_tx_queue_release,
 	.get_reg_length = nicvf_dev_get_reg_length,
 	.get_reg = nicvf_dev_get_regs,
 };
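
[Editor's note] For context, the two ops registered in the last hunk are not called
directly; an application reaches them through the public ethdev API. Below is a
minimal, illustrative sketch for the DPDK API of this era. The helper name
example_setup_tx_queue() and the port id, queue index, descriptor count and
txq_flags are assumed example values, not taken from this patch:

	/* Illustrative sketch only, not part of the patch. */
	#include <rte_ethdev.h>

	static int
	example_setup_tx_queue(uint8_t port_id, unsigned int socket_id)
	{
		struct rte_eth_txconf txconf = {
			/* 0: let the PMD fall back to its default threshold */
			.tx_free_thresh = 0,
			/* single-pool, no-refcount case per this patch */
			.txq_flags = ETH_TXQ_FLAGS_NOREFCOUNT |
				     ETH_TXQ_FLAGS_NOMULTMEMP,
		};

		/* 512 descriptors is an example value; the PMD rounds it up to
		 * a supported SQ size via nicvf_qsize_sq_roundup(). */
		return rte_eth_tx_queue_setup(port_id, 0, 512, socket_id, &txconf);
	}

Passing tx_free_thresh = 0 lets nicvf_dev_tx_queue_setup() substitute
NICVF_DEFAULT_TX_FREE_THRESH (or NICVF_TX_FREE_MPOOL_THRESH in the multi-pool
case), as shown in the patch above.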
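
[Editor's note] One implementation detail worth calling out: nicvf_qset_sq_alloc()
stores qlen_mask = desc_cnt - 1 and nicvf_tx_queue_release_mbufs() wraps the head
index with that mask, which is only correct because nicvf_qsize_sq_roundup()
returns a power-of-two descriptor count. A standalone sketch of that invariant;
the ring_next() helper is hypothetical and for illustration only:

	#include <assert.h>
	#include <stdint.h>

	/* With a power-of-two ring size, masking with (desc_cnt - 1) is
	 * equivalent to a modulo when wrapping an index, e.g. 511 -> 0
	 * for a 512-entry ring. */
	static uint32_t
	ring_next(uint32_t idx, uint32_t desc_cnt)
	{
		assert((desc_cnt & (desc_cnt - 1)) == 0); /* power of two */
		return (idx + 1) & (desc_cnt - 1);
	}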