From patchwork Fri Apr 12 12:29:06 2019
X-Patchwork-Submitter: Gagandeep Singh
X-Patchwork-Id: 52708
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gagandeep Singh <g.singh@nxp.com>
To: dev@dpdk.org, ferruh.yigit@intel.com
Cc: Gagandeep Singh <g.singh@nxp.com>
Date: Fri, 12 Apr 2019 12:29:06 +0000
Message-ID: <20190412122840.1908-11-g.singh@nxp.com>
References: <20190412105105.24351-1-g.singh@nxp.com> <20190412122840.1908-1-g.singh@nxp.com>
In-Reply-To: <20190412122840.1908-1-g.singh@nxp.com>
X-Mailer: git-send-email 2.19.1
Subject: [dpdk-dev] [PATCH v4 10/13] net/enetc: enable Rx-Tx queue start/stop feature

Enable the Rx and Tx queue start/stop and deferred queue start features
in the enetc PMD.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
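Usage note (not part of the commit message): the deferred start flags handled
below are driven from the application through the generic ethdev API. A
minimal, hypothetical sketch of that flow follows; setup_deferred_rxq,
port_id, mb_pool and the descriptor count are illustrative only, and the
port is assumed to have been configured with rte_eth_dev_configure() already.

#include <string.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

/*
 * Illustrative sketch: set up Rx queue 0 with deferred start so that
 * dev_start leaves the ring disabled, then enable it explicitly. With
 * this patch, the final call resolves to enetc_rx_queue_start(), which
 * sets ENETC_RBMR_EN and marks the queue RTE_ETH_QUEUE_STATE_STARTED.
 */
static int
setup_deferred_rxq(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_rxconf rxconf;
	int ret;

	memset(&rxconf, 0, sizeof(rxconf));
	rxconf.rx_deferred_start = 1; /* do not enable the ring at dev_start */

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     &rxconf, mb_pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* Enable the ring only when the application is ready to receive. */
	return rte_eth_dev_rx_queue_start(port_id, 0);
}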
 doc/guides/nics/enetc.rst          |   2 +
 doc/guides/nics/features/enetc.ini |   1 +
 drivers/net/enetc/enetc_ethdev.c   | 185 ++++++++++++++++++++---------
 3 files changed, 134 insertions(+), 54 deletions(-)

diff --git a/doc/guides/nics/enetc.rst b/doc/guides/nics/enetc.rst
index eeb07523d..26d61f67d 100644
--- a/doc/guides/nics/enetc.rst
+++ b/doc/guides/nics/enetc.rst
@@ -50,6 +50,8 @@ ENETC Features
 - Promiscuous
 - Multicast
 - Jumbo packets
+- Queue Start/Stop
+- Deferred Queue Start
 
 NIC Driver (PMD)
 ~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/enetc.ini b/doc/guides/nics/features/enetc.ini
index 0eed2cb9b..bd901faf4 100644
--- a/doc/guides/nics/features/enetc.ini
+++ b/doc/guides/nics/features/enetc.ini
@@ -11,6 +11,7 @@ Promiscuous mode     = Y
 Allmulticast mode    = Y
 MTU update           = Y
 Jumbo frame          = Y
+Queue start/stop     = Y
 Linux VFIO           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 66cbf74d0..ff9301e01 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -203,7 +203,6 @@ static void
 enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
 {
 	int idx = tx_ring->index;
-	uint32_t tbmr;
 	phys_addr_t bd_address;
 
 	bd_address = (phys_addr_t)
@@ -215,9 +214,6 @@ enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
 	enetc_txbdr_wr(hw, idx, ENETC_TBLENR,
 		       ENETC_RTBLENR_LEN(tx_ring->bd_count));
 
-	tbmr = ENETC_TBMR_EN;
-	/* enable ring */
-	enetc_txbdr_wr(hw, idx, ENETC_TBMR, tbmr);
 	enetc_txbdr_wr(hw, idx, ENETC_TBCIR, 0);
 	enetc_txbdr_wr(hw, idx, ENETC_TBCISR, 0);
 	tx_ring->tcir = (void *)((size_t)hw->reg +
@@ -227,16 +223,22 @@ enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
 }
 
 static int
-enetc_alloc_tx_resources(struct rte_eth_dev *dev,
-			 uint16_t queue_idx,
-			 uint16_t nb_desc)
+enetc_tx_queue_setup(struct rte_eth_dev *dev,
+		     uint16_t queue_idx,
+		     uint16_t nb_desc,
+		     unsigned int socket_id __rte_unused,
+		     const struct rte_eth_txconf *tx_conf)
 {
-	int err;
+	int err = 0;
 	struct enetc_bdr *tx_ring;
 	struct rte_eth_dev_data *data = dev->data;
 	struct enetc_eth_adapter *priv =
 			ENETC_DEV_PRIVATE(data->dev_private);
 
+	PMD_INIT_FUNC_TRACE();
+	if (nb_desc > MAX_BD_COUNT)
+		return -1;
+
 	tx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
 	if (tx_ring == NULL) {
 		ENETC_PMD_ERR("Failed to allocate TX ring memory");
@@ -253,6 +255,17 @@ enetc_alloc_tx_resources(struct rte_eth_dev *dev,
 	enetc_setup_txbdr(&priv->hw.hw, tx_ring);
 	data->tx_queues[queue_idx] = tx_ring;
 
+	if (!tx_conf->tx_deferred_start) {
+		/* enable ring */
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index,
+			       ENETC_TBMR, ENETC_TBMR_EN);
+		dev->data->tx_queue_state[tx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+	} else {
+		dev->data->tx_queue_state[tx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
 	return 0;
 fail:
 	rte_free(tx_ring);
@@ -260,24 +273,6 @@ enetc_alloc_tx_resources(struct rte_eth_dev *dev,
 	return err;
 }
 
-static int
-enetc_tx_queue_setup(struct rte_eth_dev *dev,
-		     uint16_t queue_idx,
-		     uint16_t nb_desc,
-		     unsigned int socket_id __rte_unused,
-		     const struct rte_eth_txconf *tx_conf __rte_unused)
-{
-	int err = 0;
-
-	PMD_INIT_FUNC_TRACE();
-	if (nb_desc > MAX_BD_COUNT)
-		return -1;
-
-	err = enetc_alloc_tx_resources(dev, queue_idx, nb_desc);
-
-	return err;
-}
-
 static void
 enetc_tx_queue_release(void *txq)
 {
@@ -367,23 +362,27 @@ enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring,
 	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rx_ring->mb_pool) -
 		   RTE_PKTMBUF_HEADROOM);
 	enetc_rxbdr_wr(hw, idx, ENETC_RBBSR, buf_size);
-	/* enable ring */
-	enetc_rxbdr_wr(hw, idx, ENETC_RBMR, ENETC_RBMR_EN);
 	enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
 }
 
 static int
-enetc_alloc_rx_resources(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 struct rte_mempool *mb_pool)
+enetc_rx_queue_setup(struct rte_eth_dev *dev,
+		     uint16_t rx_queue_id,
+		     uint16_t nb_rx_desc,
+		     unsigned int socket_id __rte_unused,
+		     const struct rte_eth_rxconf *rx_conf,
+		     struct rte_mempool *mb_pool)
 {
-	int err;
+	int err = 0;
 	struct enetc_bdr *rx_ring;
 	struct rte_eth_dev_data *data = dev->data;
 	struct enetc_eth_adapter *adapter =
 			ENETC_DEV_PRIVATE(data->dev_private);
 
+	PMD_INIT_FUNC_TRACE();
+	if (nb_rx_desc > MAX_BD_COUNT)
+		return -1;
+
 	rx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
 	if (rx_ring == NULL) {
 		ENETC_PMD_ERR("Failed to allocate RX ring memory");
@@ -400,6 +399,17 @@ enetc_alloc_rx_resources(struct rte_eth_dev *dev,
 	enetc_setup_rxbdr(&adapter->hw.hw, rx_ring, mb_pool);
 	data->rx_queues[rx_queue_id] = rx_ring;
 
+	if (!rx_conf->rx_deferred_start) {
+		/* enable ring */
+		enetc_rxbdr_wr(&adapter->hw.hw, rx_ring->index, ENETC_RBMR,
+			       ENETC_RBMR_EN);
+		dev->data->rx_queue_state[rx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+	} else {
+		dev->data->rx_queue_state[rx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
 	return 0;
 fail:
 	rte_free(rx_ring);
@@ -407,27 +417,6 @@ enetc_alloc_rx_resources(struct rte_eth_dev *dev,
 	return err;
 }
 
-static int
-enetc_rx_queue_setup(struct rte_eth_dev *dev,
-		     uint16_t rx_queue_id,
-		     uint16_t nb_rx_desc,
-		     unsigned int socket_id __rte_unused,
-		     const struct rte_eth_rxconf *rx_conf __rte_unused,
-		     struct rte_mempool *mb_pool)
-{
-	int err = 0;
-
-	PMD_INIT_FUNC_TRACE();
-	if (nb_rx_desc > MAX_BD_COUNT)
-		return -1;
-
-	err = enetc_alloc_rx_resources(dev, rx_queue_id,
-				       nb_rx_desc,
-				       mb_pool);
-
-	return err;
-}
-
 static void
 enetc_rx_queue_release(void *rxq)
 {
@@ -661,6 +650,90 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+enetc_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *rx_ring;
+	uint32_t rx_data;
+
+	rx_ring = dev->data->rx_queues[qidx];
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+		rx_data = enetc_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+					 ENETC_RBMR);
+		rx_data = rx_data | ENETC_RBMR_EN;
+		enetc_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+			       rx_data);
+		dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *rx_ring;
+	uint32_t rx_data;
+
+	rx_ring = dev->data->rx_queues[qidx];
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+		rx_data = enetc_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+					 ENETC_RBMR);
+		rx_data = rx_data & (~ENETC_RBMR_EN);
+		enetc_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+			       rx_data);
+		dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *tx_ring;
+	uint32_t tx_data;
+
+	tx_ring = dev->data->tx_queues[qidx];
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+		tx_data = enetc_txbdr_rd(&priv->hw.hw, tx_ring->index,
+					 ENETC_TBMR);
+		tx_data = tx_data | ENETC_TBMR_EN;
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+			       tx_data);
+		dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *tx_ring;
+	uint32_t tx_data;
+
+	tx_ring = dev->data->tx_queues[qidx];
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+		tx_data = enetc_txbdr_rd(&priv->hw.hw, tx_ring->index,
+					 ENETC_TBMR);
+		tx_data = tx_data & (~ENETC_TBMR_EN);
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+			       tx_data);
+		dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -686,8 +759,12 @@ static const struct eth_dev_ops enetc_ops = {
 	.dev_infos_get        = enetc_dev_infos_get,
 	.mtu_set              = enetc_mtu_set,
 	.rx_queue_setup       = enetc_rx_queue_setup,
+	.rx_queue_start       = enetc_rx_queue_start,
+	.rx_queue_stop        = enetc_rx_queue_stop,
 	.rx_queue_release     = enetc_rx_queue_release,
 	.tx_queue_setup       = enetc_tx_queue_setup,
+	.tx_queue_start       = enetc_tx_queue_start,
+	.tx_queue_stop        = enetc_tx_queue_stop,
 	.tx_queue_release     = enetc_tx_queue_release,
 	.dev_supported_ptypes_get = enetc_supported_ptypes_get,
 };
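
A brief usage note on the start/stop ops wired into enetc_ops above: once the
device is running, a queue can be paused and resumed through the generic
ethdev calls, which end up toggling ENETC_RBMR_EN/ENETC_TBMR_EN in the
handlers added by this patch. A small illustrative fragment follows; the
pause_and_resume_txq helper and its arguments are hypothetical.

#include <rte_ethdev.h>

/*
 * Illustrative only: stop and restart one Tx queue at runtime. Redundant
 * calls are harmless here because the driver checks tx_queue_state before
 * touching the TBMR register.
 */
static int
pause_and_resume_txq(uint16_t port_id, uint16_t queue_id)
{
	int ret;

	ret = rte_eth_dev_tx_queue_stop(port_id, queue_id);
	if (ret < 0)
		return ret;

	/* ... quiesce or reconfigure application state here ... */

	return rte_eth_dev_tx_queue_start(port_id, queue_id);
}

The same paths should also be reachable from testpmd through its per-queue
start/stop commands (e.g. "port 0 txq 0 stop" followed by "port 0 txq 0 start").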