From patchwork Thu Jun 7 09:42:56 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40714
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com
Date: Thu, 7 Jun 2018 11:42:56 +0200
Message-Id: <20180607094322.14312-1-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
Subject: [dpdk-dev] [PATCH v3 01/27] net/ena: change version number to 1.1.0
List-Id: DPDK patches and discussions

The upcoming patches for the ENA PMD are part of the 1.1.0 update of the
PMD, so the version number is updated accordingly.
Signed-off-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index c595cc7e6..190ed40d1 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -54,7 +54,7 @@
 #include

 #define DRV_MODULE_VER_MAJOR	1
-#define DRV_MODULE_VER_MINOR	0
+#define DRV_MODULE_VER_MINOR	1
 #define DRV_MODULE_VER_SUBMINOR	0

 #define ENA_IO_TXQ_IDX(q)	(2 * (q))

From patchwork Thu Jun 7 09:42:57 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40716
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:42:57 +0200
Message-Id: <20180607094322.14312-2-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 02/27] net/ena: update ena_com to the newer version
List-Id: DPDK patches and discussions

ena_com is the HAL provided by the vendor and it should not be modified
by the driver developers.
The PMD and the platform file were adjusted for the new version of the
ena_com:
 * Do not use deprecated meta descriptor fields
 * Add empty AENQ handler structure with unimplemented handlers
 * Add memzone allocations count to ena_ethdev.c file - it was removed
   from ena_com.c file
 * Add new macros used in new ena_com files
 * Use error code ENA_COM_UNSUPPORTED instead of ENA_COM_PERMISSION

Signed-off-by: Michal Krawczyk
Signed-off-by: Rafal Kozik
---
 drivers/net/ena/base/ena_com.c                  |  706 +++++++-------
 drivers/net/ena/base/ena_com.h                  |  112 +--
 drivers/net/ena/base/ena_defs/ena_admin_defs.h  | 1164 +++++++----------------
 drivers/net/ena/base/ena_defs/ena_common_defs.h |    8 +-
 drivers/net/ena/base/ena_defs/ena_eth_io_defs.h |  758 +++++----------
 drivers/net/ena/base/ena_defs/ena_gen_info.h    |    4 +-
 drivers/net/ena/base/ena_defs/ena_includes.h    |    2 -
 drivers/net/ena/base/ena_defs/ena_regs_defs.h   |   36 +
 drivers/net/ena/base/ena_eth_com.c              |   78 +-
 drivers/net/ena/base/ena_eth_com.h              |   10 +-
 drivers/net/ena/base/ena_plat.h                 |    2 -
 drivers/net/ena/base/ena_plat_dpdk.h            |   39 +-
 drivers/net/ena/ena_ethdev.c                    |   54 +-
 13 files changed, 1111 insertions(+), 1862 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 38a058775..484664779 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -37,11 +37,19 @@
 /*****************************************************************************/

 /* Timeout in micro-sec */
-#define ADMIN_CMD_TIMEOUT_US (1000000)
+#define ADMIN_CMD_TIMEOUT_US (3000000)

-#define ENA_ASYNC_QUEUE_DEPTH 4
+#define ENA_ASYNC_QUEUE_DEPTH 16
 #define ENA_ADMIN_QUEUE_DEPTH 32

+#ifdef ENA_EXTENDED_STATS
+
+#define ENA_HISTOGRAM_ACTIVE_MASK_OFFSET 0xF08
+#define ENA_EXTENDED_STAT_GET_FUNCT(_funct_queue) (_funct_queue & 0xFFFF)
+#define ENA_EXTENDED_STAT_GET_QUEUE(_funct_queue) (_funct_queue >> 16)
+
+#endif /* ENA_EXTENDED_STATS */
+
 #define MIN_ENA_VER (((ENA_COMMON_SPEC_VERSION_MAJOR) << \
		ENA_REGS_VERSION_MAJOR_VERSION_SHIFT) \
		| (ENA_COMMON_SPEC_VERSION_MINOR))
@@ -62,7 +70,9 @@

 #define ENA_MMIO_READ_TIMEOUT 0xFFFFFFFF

-static int ena_alloc_cnt;
+#define ENA_REGS_ADMIN_INTR_MASK 1
+
+#define ENA_POLL_MS	5

 /*****************************************************************************/
 /*****************************************************************************/
@@ -86,6 +96,11 @@ struct ena_comp_ctx {
	bool occupied;
 };

+struct ena_com_stats_ctx {
+	struct ena_admin_aq_get_stats_cmd get_cmd;
+	struct ena_admin_acq_get_stats_resp get_resp;
+};
+
 static inline int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
				       struct ena_common_mem_addr *ena_addr,
				       dma_addr_t addr)
@@ -95,50 +110,49 @@ static inline int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
		return ENA_COM_INVAL;
	}

-	ena_addr->mem_addr_low = (u32)addr;
-	ena_addr->mem_addr_high =
-		((addr & GENMASK_ULL(ena_dev->dma_addr_bits - 1, 32)) >> 32);
+	ena_addr->mem_addr_low = lower_32_bits(addr);
+	ena_addr->mem_addr_high = (u16)upper_32_bits(addr);

	return 0;
 }

 static int ena_com_admin_init_sq(struct ena_com_admin_queue *queue)
 {
-	ENA_MEM_ALLOC_COHERENT(queue->q_dmadev,
-			       ADMIN_SQ_SIZE(queue->q_depth),
-			       queue->sq.entries,
-			       queue->sq.dma_addr,
-			       queue->sq.mem_handle);
+	struct ena_com_admin_sq *sq = &queue->sq;
+	u16 size = ADMIN_SQ_SIZE(queue->q_depth);

-	if (!queue->sq.entries) {
+	ENA_MEM_ALLOC_COHERENT(queue->q_dmadev, size, sq->entries, sq->dma_addr,
+			       sq->mem_handle);
+
+	if (!sq->entries) {
		ena_trc_err("memory allocation failed");
		return ENA_COM_NO_MEM;
	}

-	queue->sq.head = 0;
-	queue->sq.tail = 0;
-	queue->sq.phase = 1;
+	sq->head = 0;
+	sq->tail = 0;
+	sq->phase = 1;

-	queue->sq.db_addr = NULL;
+	sq->db_addr = NULL;

	return 0;
 }

 static int ena_com_admin_init_cq(struct ena_com_admin_queue *queue)
 {
-	ENA_MEM_ALLOC_COHERENT(queue->q_dmadev,
-			       ADMIN_CQ_SIZE(queue->q_depth),
-			       queue->cq.entries,
-			       queue->cq.dma_addr,
-			       queue->cq.mem_handle);
+	struct ena_com_admin_cq *cq = &queue->cq;
+	u16 size = ADMIN_CQ_SIZE(queue->q_depth);
+
+	ENA_MEM_ALLOC_COHERENT(queue->q_dmadev, size, cq->entries, cq->dma_addr,
+			       cq->mem_handle);

-	if (!queue->cq.entries) {
+	if (!cq->entries) {
		ena_trc_err("memory allocation failed");
		return ENA_COM_NO_MEM;
	}

-	queue->cq.head = 0;
-	queue->cq.phase = 1;
+	cq->head = 0;
+	cq->phase = 1;

	return 0;
 }
@@ -146,44 +160,44 @@ static int ena_com_admin_init_cq(struct ena_com_admin_queue *queue)
 static int ena_com_admin_init_aenq(struct ena_com_dev *dev,
				   struct ena_aenq_handlers *aenq_handlers)
 {
+	struct ena_com_aenq *aenq = &dev->aenq;
	u32 addr_low, addr_high, aenq_caps;
+	u16 size;

	dev->aenq.q_depth = ENA_ASYNC_QUEUE_DEPTH;
-	ENA_MEM_ALLOC_COHERENT(dev->dmadev,
-			       ADMIN_AENQ_SIZE(dev->aenq.q_depth),
-			       dev->aenq.entries,
-			       dev->aenq.dma_addr,
-			       dev->aenq.mem_handle);
+	size = ADMIN_AENQ_SIZE(ENA_ASYNC_QUEUE_DEPTH);
+	ENA_MEM_ALLOC_COHERENT(dev->dmadev, size,
+			       aenq->entries,
+			       aenq->dma_addr,
+			       aenq->mem_handle);

-	if (!dev->aenq.entries) {
+	if (!aenq->entries) {
		ena_trc_err("memory allocation failed");
		return ENA_COM_NO_MEM;
	}

-	dev->aenq.head = dev->aenq.q_depth;
-	dev->aenq.phase = 1;
+	aenq->head = aenq->q_depth;
+	aenq->phase = 1;

-	addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(dev->aenq.dma_addr);
-	addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(dev->aenq.dma_addr);
+	addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(aenq->dma_addr);
+	addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(aenq->dma_addr);

-	ENA_REG_WRITE32(addr_low, (unsigned char *)dev->reg_bar
-			+ ENA_REGS_AENQ_BASE_LO_OFF);
-	ENA_REG_WRITE32(addr_high, (unsigned char *)dev->reg_bar
-			+ ENA_REGS_AENQ_BASE_HI_OFF);
+	ENA_REG_WRITE32(dev->bus, addr_low, dev->reg_bar + ENA_REGS_AENQ_BASE_LO_OFF);
+	ENA_REG_WRITE32(dev->bus, addr_high, dev->reg_bar + ENA_REGS_AENQ_BASE_HI_OFF);

	aenq_caps = 0;
	aenq_caps |= dev->aenq.q_depth & ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK;
	aenq_caps |= (sizeof(struct ena_admin_aenq_entry) <<
		ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT) &
		ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK;
+	ENA_REG_WRITE32(dev->bus, aenq_caps, dev->reg_bar + ENA_REGS_AENQ_CAPS_OFF);

-	ENA_REG_WRITE32(aenq_caps, (unsigned char *)dev->reg_bar
-			+ ENA_REGS_AENQ_CAPS_OFF);
-
-	if (unlikely(!aenq_handlers))
+	if (unlikely(!aenq_handlers)) {
		ena_trc_err("aenq handlers pointer is NULL\n");
+		return ENA_COM_INVAL;
+	}

-	dev->aenq.aenq_handlers = aenq_handlers;
+	aenq->aenq_handlers = aenq_handlers;

	return 0;
 }
@@ -217,12 +231,11 @@ static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *queue,
	return &queue->comp_ctx[command_id];
 }

-static struct ena_comp_ctx *
-__ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
-			   struct ena_admin_aq_entry *cmd,
-			   size_t cmd_size_in_bytes,
-			   struct ena_admin_acq_entry *comp,
-			   size_t comp_size_in_bytes)
+static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
						       struct ena_admin_aq_entry *cmd,
						       size_t cmd_size_in_bytes,
						       struct ena_admin_acq_entry *comp,
						       size_t comp_size_in_bytes)
 {
	struct ena_comp_ctx *comp_ctx;
	u16 tail_masked, cmd_id;
@@ -234,12 +247,9 @@ __ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
	tail_masked = admin_queue->sq.tail & queue_size_mask;

	/* In case of queue FULL */
-	cnt = admin_queue->sq.tail - admin_queue->sq.head;
+	cnt = ATOMIC32_READ(&admin_queue->outstanding_cmds);
	if (cnt >= admin_queue->q_depth) {
-		ena_trc_dbg("admin queue is FULL (tail %d head %d depth: %d)\n",
-			    admin_queue->sq.tail,
-			    admin_queue->sq.head,
-			    admin_queue->q_depth);
+		ena_trc_dbg("admin queue is full.\n");
		admin_queue->stats.out_of_space++;
		return ERR_PTR(ENA_COM_NO_SPACE);
	}
@@ -253,6 +263,8 @@ __ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
		ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK;

	comp_ctx = get_comp_ctxt(admin_queue, cmd_id, true);
+	if (unlikely(!comp_ctx))
+		return ERR_PTR(ENA_COM_INVAL);

	comp_ctx->status = ENA_CMD_SUBMITTED;
	comp_ctx->comp_size = (u32)comp_size_in_bytes;
@@ -272,7 +284,8 @@ __ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
	if (unlikely((admin_queue->sq.tail & queue_size_mask) == 0))
		admin_queue->sq.phase = !admin_queue->sq.phase;

-	ENA_REG_WRITE32(admin_queue->sq.tail, admin_queue->sq.db_addr);
+	ENA_REG_WRITE32(admin_queue->bus, admin_queue->sq.tail,
+			admin_queue->sq.db_addr);

	return comp_ctx;
 }
@@ -298,12 +311,11 @@ static inline int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
	return 0;
 }

-static struct ena_comp_ctx *
-ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
-			 struct ena_admin_aq_entry *cmd,
-			 size_t cmd_size_in_bytes,
-			 struct ena_admin_acq_entry *comp,
-			 size_t comp_size_in_bytes)
+static struct ena_comp_ctx *ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
						     struct ena_admin_aq_entry *cmd,
						     size_t cmd_size_in_bytes,
						     struct ena_admin_acq_entry *comp,
						     size_t comp_size_in_bytes)
 {
	unsigned long flags = 0;
	struct ena_comp_ctx *comp_ctx;
@@ -317,7 +329,7 @@ ena_com_submit_admin_cmd(struct ena_com_admin_queue *admin_queue,
					      cmd_size_in_bytes,
					      comp,
					      comp_size_in_bytes);
-	if (unlikely(IS_ERR(comp_ctx)))
+	if (IS_ERR(comp_ctx))
		admin_queue->running_state = false;
	ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);

@@ -331,9 +343,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
	size_t size;
	int dev_node = 0;

-	ENA_TOUCH(ctx);
-
-	memset(&io_sq->desc_addr, 0x0, sizeof(struct ena_com_io_desc_addr));
+	memset(&io_sq->desc_addr, 0x0, sizeof(io_sq->desc_addr));

	io_sq->desc_entry_size =
		(io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ?
@@ -347,23 +357,26 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
					    size,
					    io_sq->desc_addr.virt_addr,
					    io_sq->desc_addr.phys_addr,
+					    io_sq->desc_addr.mem_handle,
					    ctx->numa_node,
					    dev_node);
-		if (!io_sq->desc_addr.virt_addr)
+		if (!io_sq->desc_addr.virt_addr) {
			ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev,
					       size,
					       io_sq->desc_addr.virt_addr,
					       io_sq->desc_addr.phys_addr,
					       io_sq->desc_addr.mem_handle);
+		}
	} else {
		ENA_MEM_ALLOC_NODE(ena_dev->dmadev,
				   size,
				   io_sq->desc_addr.virt_addr,
				   ctx->numa_node,
				   dev_node);
-		if (!io_sq->desc_addr.virt_addr)
+		if (!io_sq->desc_addr.virt_addr) {
			io_sq->desc_addr.virt_addr =
				ENA_MEM_ALLOC(ena_dev->dmadev, size);
+		}
	}

	if (!io_sq->desc_addr.virt_addr) {
@@ -385,8 +398,7 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
	size_t size;
	int prev_node = 0;

-	ENA_TOUCH(ctx);
-	memset(&io_cq->cdesc_addr, 0x0, sizeof(struct ena_com_io_desc_addr));
+	memset(&io_cq->cdesc_addr, 0x0, sizeof(io_cq->cdesc_addr));

	/* Use the basic completion descriptor for Rx */
	io_cq->cdesc_entry_size_in_bytes =
@@ -397,17 +409,19 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
	size = io_cq->cdesc_entry_size_in_bytes * io_cq->q_depth;

	ENA_MEM_ALLOC_COHERENT_NODE(ena_dev->dmadev,
-				    size,
-				    io_cq->cdesc_addr.virt_addr,
-				    io_cq->cdesc_addr.phys_addr,
-				    ctx->numa_node,
-				    prev_node);
-	if (!io_cq->cdesc_addr.virt_addr)
+				    size,
+				    io_cq->cdesc_addr.virt_addr,
+				    io_cq->cdesc_addr.phys_addr,
+				    io_cq->cdesc_addr.mem_handle,
+				    ctx->numa_node,
+				    prev_node);
+	if (!io_cq->cdesc_addr.virt_addr) {
		ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev,
				       size,
				       io_cq->cdesc_addr.virt_addr,
				       io_cq->cdesc_addr.phys_addr,
				       io_cq->cdesc_addr.mem_handle);
+	}

	if (!io_cq->cdesc_addr.virt_addr) {
		ena_trc_err("memory allocation failed");
@@ -420,9 +434,8 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
	return 0;
 }

-static void
-ena_com_handle_single_admin_completion(struct ena_com_admin_queue *admin_queue,
-				       struct ena_admin_acq_entry *cqe)
+static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *admin_queue,
						   struct ena_admin_acq_entry *cqe)
 {
	struct ena_comp_ctx *comp_ctx;
	u16 cmd_id;
@@ -447,8 +460,7 @@ ena_com_handle_single_admin_completion(struct ena_com_admin_queue *admin_queue,
		ENA_WAIT_EVENT_SIGNAL(comp_ctx->wait_event);
 }

-static void
-ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_queue)
+static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_queue)
 {
	struct ena_admin_acq_entry *cqe = NULL;
	u16 comp_num = 0;
@@ -499,7 +511,7 @@ static int ena_com_comp_status_to_errno(u8 comp_status)
	case ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE:
		return ENA_COM_NO_MEM;
	case ENA_ADMIN_UNSUPPORTED_OPCODE:
-		return ENA_COM_PERMISSION;
+		return ENA_COM_UNSUPPORTED;
	case ENA_ADMIN_BAD_OPCODE:
	case ENA_ADMIN_MALFORMED_REQUEST:
	case ENA_ADMIN_ILLEGAL_PARAMETER:
@@ -510,20 +522,24 @@ static int ena_com_comp_status_to_errno(u8 comp_status)
	return 0;
 }

-static int
-ena_com_wait_and_process_admin_cq_polling(
-	struct ena_comp_ctx *comp_ctx,
-	struct ena_com_admin_queue *admin_queue)
+static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_ctx,
						     struct ena_com_admin_queue *admin_queue)
 {
	unsigned long flags = 0;
-	u64 start_time;
+	unsigned long timeout;
	int ret;

-	start_time = ENA_GET_SYSTEM_USECS();
+	timeout = ENA_GET_SYSTEM_TIMEOUT(admin_queue->completion_timeout);
+
+	while (1) {
+		ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
+		ena_com_handle_admin_completion(admin_queue);
+		ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
+
+		if (comp_ctx->status != ENA_CMD_SUBMITTED)
+			break;

-	while (comp_ctx->status == ENA_CMD_SUBMITTED) {
-		if ((ENA_GET_SYSTEM_USECS() - start_time) >
-		    ADMIN_CMD_TIMEOUT_US) {
+		if (ENA_TIME_EXPIRE(timeout)) {
			ena_trc_err("Wait for completion (polling) timeout\n");
			/* ENA didn't have any completion */
			ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
@@ -535,9 +551,7 @@ ena_com_wait_and_process_admin_cq_polling(
			goto err;
		}

-		ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
-		ena_com_handle_admin_completion(admin_queue);
-		ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
+		ENA_MSLEEP(ENA_POLL_MS);
	}

	if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) {
@@ -549,8 +563,8 @@ ena_com_wait_and_process_admin_cq_polling(
		goto err;
	}

-	ENA_ASSERT(comp_ctx->status == ENA_CMD_COMPLETED,
-		   "Invalid comp status %d\n", comp_ctx->status);
+	ENA_WARN(comp_ctx->status != ENA_CMD_COMPLETED,
+		 "Invalid comp status %d\n", comp_ctx->status);

	ret = ena_com_comp_status_to_errno(comp_ctx->comp_status);
 err:
@@ -558,16 +572,14 @@ ena_com_wait_and_process_admin_cq_polling(
	return ret;
 }

-static int
-ena_com_wait_and_process_admin_cq_interrupts(
-	struct ena_comp_ctx *comp_ctx,
-	struct ena_com_admin_queue *admin_queue)
+static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *comp_ctx,
							struct ena_com_admin_queue *admin_queue)
 {
	unsigned long flags = 0;
-	int ret = 0;
+	int ret;

	ENA_WAIT_EVENT_WAIT(comp_ctx->wait_event,
-			    ADMIN_CMD_TIMEOUT_US);
+			    admin_queue->completion_timeout);

	/* In case the command wasn't completed find out the root cause.
	 * There might be 2 kinds of errors
@@ -607,16 +619,18 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
	struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;
	volatile struct ena_admin_ena_mmio_req_read_less_resp *read_resp =
		mmio_read->read_resp;
-	u32 mmio_read_reg, ret;
+	u32 mmio_read_reg, ret, i;
	unsigned long flags = 0;
-	int i;
+	u32 timeout = mmio_read->reg_read_to;

	ENA_MIGHT_SLEEP();

+	if (timeout == 0)
+		timeout = ENA_REG_READ_TIMEOUT;
+
	/* If readless is disabled, perform regular read */
	if (!mmio_read->readless_supported)
-		return ENA_REG_READ32((unsigned char *)ena_dev->reg_bar +
-				      offset);
+		return ENA_REG_READ32(ena_dev->bus, ena_dev->reg_bar + offset);

	ENA_SPINLOCK_LOCK(mmio_read->lock, flags);
	mmio_read->seq_num++;
@@ -632,17 +646,16 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
	 */
	wmb();

-	ENA_REG_WRITE32(mmio_read_reg, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_MMIO_REG_READ_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, mmio_read_reg, ena_dev->reg_bar + ENA_REGS_MMIO_REG_READ_OFF);

-	for (i = 0; i < ENA_REG_READ_TIMEOUT; i++) {
+	for (i = 0; i < timeout; i++) {
		if (read_resp->req_id == mmio_read->seq_num)
			break;

		ENA_UDELAY(1);
	}

-	if (unlikely(i == ENA_REG_READ_TIMEOUT)) {
+	if (unlikely(i == timeout)) {
		ena_trc_err("reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n",
			    mmio_read->seq_num,
			    offset,
@@ -653,7 +666,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
	}

	if (read_resp->reg_off != offset) {
-		ena_trc_err("reading failed for wrong offset value");
+		ena_trc_err("Read failure: wrong offset provided");
		ret = ENA_MMIO_READ_TIMEOUT;
	} else {
		ret = read_resp->reg_val;
@@ -671,9 +684,8 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
 * It is expected that the IRQ called ena_com_handle_admin_completion
 * to mark the completions.
 */
-static int
-ena_com_wait_and_process_admin_cq(struct ena_comp_ctx *comp_ctx,
-				  struct ena_com_admin_queue *admin_queue)
+static int ena_com_wait_and_process_admin_cq(struct ena_comp_ctx *comp_ctx,
					     struct ena_com_admin_queue *admin_queue)
 {
	if (admin_queue->polling)
		return ena_com_wait_and_process_admin_cq_polling(comp_ctx,
@@ -692,7 +704,7 @@ static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev,
	u8 direction;
	int ret;

-	memset(&destroy_cmd, 0x0, sizeof(struct ena_admin_aq_destroy_sq_cmd));
+	memset(&destroy_cmd, 0x0, sizeof(destroy_cmd));

	if (io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX)
		direction = ENA_ADMIN_SQ_DIRECTION_TX;
@@ -706,12 +718,11 @@ static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev,
	destroy_cmd.sq.sq_idx = io_sq->idx;
	destroy_cmd.aq_common_descriptor.opcode = ENA_ADMIN_DESTROY_SQ;

-	ret = ena_com_execute_admin_command(
-		admin_queue,
-		(struct ena_admin_aq_entry *)&destroy_cmd,
-		sizeof(destroy_cmd),
-		(struct ena_admin_acq_entry *)&destroy_resp,
-		sizeof(destroy_resp));
+	ret = ena_com_execute_admin_command(admin_queue,
					    (struct ena_admin_aq_entry *)&destroy_cmd,
					    sizeof(destroy_cmd),
					    (struct ena_admin_acq_entry *)&destroy_resp,
					    sizeof(destroy_resp));

	if (unlikely(ret && (ret != ENA_COM_NO_DEVICE)))
		ena_trc_err("failed to destroy io sq error: %d\n", ret);
@@ -747,18 +758,20 @@ static void ena_com_io_queue_free(struct ena_com_dev *ena_dev,
					      io_sq->desc_addr.phys_addr,
					      io_sq->desc_addr.mem_handle);
		else
-			ENA_MEM_FREE(ena_dev->dmadev,
-				     io_sq->desc_addr.virt_addr);
+			ENA_MEM_FREE(ena_dev->dmadev, io_sq->desc_addr.virt_addr);

		io_sq->desc_addr.virt_addr = NULL;
	}
 }

-static int wait_for_reset_state(struct ena_com_dev *ena_dev,
-				u32 timeout, u16 exp_state)
+static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout,
+				u16 exp_state)
 {
	u32 val, i;

+	/* Convert timeout from resolution of 100ms to ENA_POLL_MS */
+	timeout = (timeout * 100) / ENA_POLL_MS;
+
	for (i = 0; i < timeout; i++) {
		val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
@@ -771,16 +784,14 @@ static int wait_for_reset_state(struct ena_com_dev *ena_dev,
		    exp_state)
			return 0;

-		/* The resolution of the timeout is 100ms */
-		ENA_MSLEEP(100);
+		ENA_MSLEEP(ENA_POLL_MS);
	}

	return ENA_COM_TIMER_EXPIRED;
 }

-static bool
-ena_com_check_supported_feature_id(struct ena_com_dev *ena_dev,
-				   enum ena_admin_aq_feature_id feature_id)
+static bool ena_com_check_supported_feature_id(struct ena_com_dev *ena_dev,
					       enum ena_admin_aq_feature_id feature_id)
 {
	u32 feature_mask = 1 << feature_id;

@@ -802,14 +813,9 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
	struct ena_admin_get_feat_cmd get_cmd;
	int ret;

-	if (!ena_dev) {
-		ena_trc_err("%s : ena_dev is NULL\n", __func__);
-		return ENA_COM_NO_DEVICE;
-	}
-
	if (!ena_com_check_supported_feature_id(ena_dev, feature_id)) {
-		ena_trc_info("Feature %d isn't supported\n", feature_id);
-		return ENA_COM_PERMISSION;
+		ena_trc_dbg("Feature %d isn't supported\n", feature_id);
+		return ENA_COM_UNSUPPORTED;
	}

	memset(&get_cmd, 0x0, sizeof(get_cmd));
@@ -945,10 +951,10 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev,
		sizeof(struct ena_admin_rss_ind_table_entry);

	ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev,
-			       tbl_size,
-			       rss->rss_ind_tbl,
-			       rss->rss_ind_tbl_dma_addr,
-			       rss->rss_ind_tbl_mem_handle);
+			       tbl_size,
+			       rss->rss_ind_tbl,
+			       rss->rss_ind_tbl_dma_addr,
+			       rss->rss_ind_tbl_mem_handle);

	if (unlikely(!rss->rss_ind_tbl))
		goto mem_err1;
@@ -1005,7 +1011,7 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
	u8 direction;
	int ret;

-	memset(&create_cmd, 0x0, sizeof(struct ena_admin_aq_create_sq_cmd));
+	memset(&create_cmd, 0x0, sizeof(create_cmd));

	create_cmd.aq_common_descriptor.opcode = ENA_ADMIN_CREATE_SQ;

@@ -1041,12 +1047,11 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
		}
	}

-	ret = ena_com_execute_admin_command(
-		admin_queue,
-		(struct ena_admin_aq_entry *)&create_cmd,
-		sizeof(create_cmd),
-		(struct ena_admin_acq_entry *)&cmd_completion,
-		sizeof(cmd_completion));
+	ret = ena_com_execute_admin_command(admin_queue,
					    (struct ena_admin_aq_entry *)&create_cmd,
					    sizeof(create_cmd),
					    (struct ena_admin_acq_entry *)&cmd_completion,
					    sizeof(cmd_completion));
	if (unlikely(ret)) {
		ena_trc_err("Failed to create IO SQ. error: %d\n", ret);
		return ret;
@@ -1133,9 +1138,8 @@ static int ena_com_init_interrupt_moderation_table(struct ena_com_dev *ena_dev)
	return 0;
 }

-static void
-ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
-				     u16 intr_delay_resolution)
+static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
						 u16 intr_delay_resolution)
 {
	struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl;
	unsigned int i;
@@ -1165,13 +1169,18 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
				  size_t comp_size)
 {
	struct ena_comp_ctx *comp_ctx;
-	int ret = 0;
+	int ret;

	comp_ctx = ena_com_submit_admin_cmd(admin_queue, cmd, cmd_size,
					    comp, comp_size);
-	if (unlikely(IS_ERR(comp_ctx))) {
-		ena_trc_err("Failed to submit command [%ld]\n",
-			    PTR_ERR(comp_ctx));
+	if (IS_ERR(comp_ctx)) {
+		if (comp_ctx == ERR_PTR(ENA_COM_NO_DEVICE))
+			ena_trc_dbg("Failed to submit command [%ld]\n",
+				    PTR_ERR(comp_ctx));
+		else
+			ena_trc_err("Failed to submit command [%ld]\n",
+				    PTR_ERR(comp_ctx));
+
		return PTR_ERR(comp_ctx);
	}

@@ -1195,7 +1204,7 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
	struct ena_admin_acq_create_cq_resp_desc cmd_completion;
	int ret;

-	memset(&create_cmd, 0x0, sizeof(struct ena_admin_aq_create_cq_cmd));
+	memset(&create_cmd, 0x0, sizeof(create_cmd));

	create_cmd.aq_common_descriptor.opcode = ENA_ADMIN_CREATE_CQ;

@@ -1215,12 +1224,11 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
		return ret;
	}

-	ret = ena_com_execute_admin_command(
-		admin_queue,
-		(struct ena_admin_aq_entry *)&create_cmd,
-		sizeof(create_cmd),
-		(struct ena_admin_acq_entry *)&cmd_completion,
-		sizeof(cmd_completion));
+	ret = ena_com_execute_admin_command(admin_queue,
					    (struct ena_admin_aq_entry *)&create_cmd,
					    sizeof(create_cmd),
					    (struct ena_admin_acq_entry *)&cmd_completion,
					    sizeof(cmd_completion));
	if (unlikely(ret)) {
		ena_trc_err("Failed to create IO CQ. error: %d\n", ret);
		return ret;
@@ -1290,7 +1298,7 @@ void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev)
	ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
	while (ATOMIC32_READ(&admin_queue->outstanding_cmds) != 0) {
		ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
-		ENA_MSLEEP(20);
+		ENA_MSLEEP(ENA_POLL_MS);
		ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
	}
	ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags);
@@ -1304,17 +1312,16 @@ int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev,
	struct ena_admin_acq_destroy_cq_resp_desc destroy_resp;
	int ret;

-	memset(&destroy_cmd, 0x0, sizeof(struct ena_admin_aq_destroy_sq_cmd));
+	memset(&destroy_cmd, 0x0, sizeof(destroy_cmd));

	destroy_cmd.cq_idx = io_cq->idx;
	destroy_cmd.aq_common_descriptor.opcode = ENA_ADMIN_DESTROY_CQ;

-	ret = ena_com_execute_admin_command(
-		admin_queue,
-		(struct ena_admin_aq_entry *)&destroy_cmd,
-		sizeof(destroy_cmd),
-		(struct ena_admin_acq_entry *)&destroy_resp,
-		sizeof(destroy_resp));
+	ret = ena_com_execute_admin_command(admin_queue,
					    (struct ena_admin_aq_entry *)&destroy_cmd,
					    sizeof(destroy_cmd),
					    (struct ena_admin_acq_entry *)&destroy_resp,
					    sizeof(destroy_resp));

	if (unlikely(ret && (ret != ENA_COM_NO_DEVICE)))
		ena_trc_err("Failed to destroy IO CQ. error: %d\n", ret);
@@ -1341,13 +1348,12 @@ void ena_com_admin_aenq_enable(struct ena_com_dev *ena_dev)
 {
	u16 depth = ena_dev->aenq.q_depth;

-	ENA_ASSERT(ena_dev->aenq.head == depth, "Invalid AENQ state\n");
+	ENA_WARN(ena_dev->aenq.head != depth, "Invalid AENQ state\n");

	/* Init head_db to mark that all entries in the queue
	 * are initially available
	 */
-	ENA_REG_WRITE32(depth, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_AENQ_HEAD_DB_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, depth, ena_dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF);
 }

 int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
@@ -1356,12 +1362,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
	struct ena_admin_set_feat_cmd cmd;
	struct ena_admin_set_feat_resp resp;
	struct ena_admin_get_feat_resp get_resp;
-	int ret = 0;
-
-	if (unlikely(!ena_dev)) {
-		ena_trc_err("%s : ena_dev is NULL\n", __func__);
-		return ENA_COM_NO_DEVICE;
-	}
+	int ret;

	ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG);
	if (ret) {
@@ -1373,7 +1374,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
		ena_trc_warn("Trying to set unsupported aenq events. supported flag: %x asked flag: %x\n",
			     get_resp.u.aenq.supported_groups,
			     groups_flag);
-		return ENA_COM_PERMISSION;
+		return ENA_COM_UNSUPPORTED;
	}

	memset(&cmd, 0x0, sizeof(cmd));
@@ -1476,41 +1477,42 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev)
 void ena_com_admin_destroy(struct ena_com_dev *ena_dev)
 {
	struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue;
+	struct ena_com_admin_cq *cq = &admin_queue->cq;
+	struct ena_com_admin_sq *sq = &admin_queue->sq;
+	struct ena_com_aenq *aenq = &ena_dev->aenq;
+	u16 size;

-	if (!admin_queue)
-		return;
-
+	ENA_WAIT_EVENT_DESTROY(admin_queue->comp_ctx->wait_event);
	if (admin_queue->comp_ctx)
		ENA_MEM_FREE(ena_dev->dmadev, admin_queue->comp_ctx);

	admin_queue->comp_ctx = NULL;
-
-	if (admin_queue->sq.entries)
-		ENA_MEM_FREE_COHERENT(ena_dev->dmadev,
-				      ADMIN_SQ_SIZE(admin_queue->q_depth),
-				      admin_queue->sq.entries,
-				      admin_queue->sq.dma_addr,
-				      admin_queue->sq.mem_handle);
-	admin_queue->sq.entries = NULL;
-
-	if (admin_queue->cq.entries)
-		ENA_MEM_FREE_COHERENT(ena_dev->dmadev,
-				      ADMIN_CQ_SIZE(admin_queue->q_depth),
-				      admin_queue->cq.entries,
-				      admin_queue->cq.dma_addr,
-				      admin_queue->cq.mem_handle);
-	admin_queue->cq.entries = NULL;
-
+	size = ADMIN_SQ_SIZE(admin_queue->q_depth);
+	if (sq->entries)
+		ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, sq->entries,
+				      sq->dma_addr, sq->mem_handle);
+	sq->entries = NULL;
+
+	size = ADMIN_CQ_SIZE(admin_queue->q_depth);
+	if (cq->entries)
+		ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, cq->entries,
+				      cq->dma_addr, cq->mem_handle);
+	cq->entries = NULL;
+
+	size = ADMIN_AENQ_SIZE(aenq->q_depth);
	if (ena_dev->aenq.entries)
-		ENA_MEM_FREE_COHERENT(ena_dev->dmadev,
-				      ADMIN_AENQ_SIZE(ena_dev->aenq.q_depth),
-				      ena_dev->aenq.entries,
-				      ena_dev->aenq.dma_addr,
-				      ena_dev->aenq.mem_handle);
-	ena_dev->aenq.entries = NULL;
+		ENA_MEM_FREE_COHERENT(ena_dev->dmadev, size, aenq->entries,
+				      aenq->dma_addr, aenq->mem_handle);
+	aenq->entries = NULL;
 }

 void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling)
 {
+	u32 mask_value = 0;
+
+	if (polling)
+		mask_value = ENA_REGS_ADMIN_INTR_MASK;
+
+	ENA_REG_WRITE32(ena_dev->bus, mask_value, ena_dev->reg_bar + ENA_REGS_INTR_MASK_OFF);
	ena_dev->admin_queue.polling = polling;
 }

@@ -1536,8 +1538,7 @@ int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev)
	return 0;
 }

-void
-ena_com_set_mmio_read_mode(struct ena_com_dev *ena_dev, bool readless_supported)
+void ena_com_set_mmio_read_mode(struct ena_com_dev *ena_dev, bool readless_supported)
 {
	struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;

@@ -1548,10 +1549,8 @@ void ena_com_mmio_reg_read_request_destroy(struct ena_com_dev *ena_dev)
 {
	struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;

-	ENA_REG_WRITE32(0x0, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_MMIO_RESP_LO_OFF);
-	ENA_REG_WRITE32(0x0, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_MMIO_RESP_HI_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, 0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_LO_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, 0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_HI_OFF);

	ENA_MEM_FREE_COHERENT(ena_dev->dmadev,
			      sizeof(*mmio_read->read_resp),
@@ -1570,10 +1569,8 @@ void ena_com_mmio_reg_read_request_write_dev_addr(struct ena_com_dev *ena_dev)
	addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(mmio_read->read_resp_dma_addr);
	addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(mmio_read->read_resp_dma_addr);

-	ENA_REG_WRITE32(addr_low, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_MMIO_RESP_LO_OFF);
-	ENA_REG_WRITE32(addr_high, (unsigned char *)ena_dev->reg_bar
-			+ ENA_REGS_MMIO_RESP_HI_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_LO_OFF);
+	ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_HI_OFF);
 }

 int ena_com_admin_init(struct ena_com_dev *ena_dev,
@@ -1619,24 +1616,20 @@ int ena_com_admin_init(struct ena_com_dev *ena_dev,
	if (ret)
		goto error;

-	admin_queue->sq.db_addr = (u32
__iomem *) - ((unsigned char *)ena_dev->reg_bar + ENA_REGS_AQ_DB_OFF); + admin_queue->sq.db_addr = (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + + ENA_REGS_AQ_DB_OFF); addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(admin_queue->sq.dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(admin_queue->sq.dma_addr); - ENA_REG_WRITE32(addr_low, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_AQ_BASE_LO_OFF); - ENA_REG_WRITE32(addr_high, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_AQ_BASE_HI_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_AQ_BASE_LO_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_AQ_BASE_HI_OFF); addr_low = ENA_DMA_ADDR_TO_UINT32_LOW(admin_queue->cq.dma_addr); addr_high = ENA_DMA_ADDR_TO_UINT32_HIGH(admin_queue->cq.dma_addr); - ENA_REG_WRITE32(addr_low, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_ACQ_BASE_LO_OFF); - ENA_REG_WRITE32(addr_high, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_ACQ_BASE_HI_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_low, ena_dev->reg_bar + ENA_REGS_ACQ_BASE_LO_OFF); + ENA_REG_WRITE32(ena_dev->bus, addr_high, ena_dev->reg_bar + ENA_REGS_ACQ_BASE_HI_OFF); aq_caps = 0; aq_caps |= admin_queue->q_depth & ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK; @@ -1650,10 +1643,8 @@ int ena_com_admin_init(struct ena_com_dev *ena_dev, ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_SHIFT) & ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_MASK; - ENA_REG_WRITE32(aq_caps, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_AQ_CAPS_OFF); - ENA_REG_WRITE32(acq_caps, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_ACQ_CAPS_OFF); + ENA_REG_WRITE32(ena_dev->bus, aq_caps, ena_dev->reg_bar + ENA_REGS_AQ_CAPS_OFF); + ENA_REG_WRITE32(ena_dev->bus, acq_caps, ena_dev->reg_bar + ENA_REGS_ACQ_CAPS_OFF); ret = ena_com_admin_init_aenq(ena_dev, aenq_handlers); if (ret) goto error; @@ -1672,7 +1663,7 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev, { struct ena_com_io_sq *io_sq; struct ena_com_io_cq *io_cq; - int ret = 0; + int ret; if (ctx->qid >= 
ENA_TOTAL_NUM_QUEUES) { ena_trc_err("Qid (%d) is bigger than max num of queues (%d)\n", @@ -1683,8 +1674,8 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev, io_sq = &ena_dev->io_sq_queues[ctx->qid]; io_cq = &ena_dev->io_cq_queues[ctx->qid]; - memset(io_sq, 0x0, sizeof(struct ena_com_io_sq)); - memset(io_cq, 0x0, sizeof(struct ena_com_io_cq)); + memset(io_sq, 0x0, sizeof(*io_sq)); + memset(io_cq, 0x0, sizeof(*io_cq)); /* Init CQ */ io_cq->q_depth = ctx->queue_size; @@ -1794,6 +1785,19 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, memcpy(&get_feat_ctx->offload, &get_resp.u.offload, sizeof(get_resp.u.offload)); + /* Driver hints isn't mandatory admin command. So in case the + * command isn't supported set driver hints to 0 + */ + rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS); + + if (!rc) + memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints, + sizeof(get_resp.u.hw_hints)); + else if (rc == ENA_COM_UNSUPPORTED) + memset(&get_feat_ctx->hw_hints, 0x0, sizeof(get_feat_ctx->hw_hints)); + else + return rc; + return 0; } @@ -1840,7 +1844,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data) ena_trc_dbg("AENQ! 
Group[%x] Syndrom[%x] timestamp: [%llus]\n", aenq_common->group, aenq_common->syndrom, - (unsigned long long)aenq_common->timestamp_low + + (u64)aenq_common->timestamp_low + ((u64)aenq_common->timestamp_high << 32)); /* Handle specific event*/ @@ -1869,11 +1873,11 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data) /* write the aenq doorbell after all AENQ descriptors were read */ mb(); - ENA_REG_WRITE32((u32)aenq->head, (unsigned char *)dev->reg_bar - + ENA_REGS_AENQ_HEAD_DB_OFF); + ENA_REG_WRITE32(dev->bus, (u32)aenq->head, dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF); } -int ena_com_dev_reset(struct ena_com_dev *ena_dev) +int ena_com_dev_reset(struct ena_com_dev *ena_dev, + enum ena_regs_reset_reason_types reset_reason) { u32 stat, timeout, cap, reset_val; int rc; @@ -1901,8 +1905,9 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev) /* start reset */ reset_val = ENA_REGS_DEV_CTL_DEV_RESET_MASK; - ENA_REG_WRITE32(reset_val, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_DEV_CTL_OFF); + reset_val |= (reset_reason << ENA_REGS_DEV_CTL_RESET_REASON_SHIFT) & + ENA_REGS_DEV_CTL_RESET_REASON_MASK; + ENA_REG_WRITE32(ena_dev->bus, reset_val, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF); /* Write again the MMIO read request address */ ena_com_mmio_reg_read_request_write_dev_addr(ena_dev); @@ -1915,29 +1920,32 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev) } /* reset done */ - ENA_REG_WRITE32(0, (unsigned char *)ena_dev->reg_bar - + ENA_REGS_DEV_CTL_OFF); + ENA_REG_WRITE32(ena_dev->bus, 0, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF); rc = wait_for_reset_state(ena_dev, timeout, 0); if (rc != 0) { ena_trc_err("Reset indication didn't turn off\n"); return rc; } + timeout = (cap & ENA_REGS_CAPS_ADMIN_CMD_TO_MASK) >> + ENA_REGS_CAPS_ADMIN_CMD_TO_SHIFT; + if (timeout) + /* the resolution of timeout reg is 100ms */ + ena_dev->admin_queue.completion_timeout = timeout * 100000; + else + ena_dev->admin_queue.completion_timeout = ADMIN_CMD_TIMEOUT_US; + return 0; } 
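The `ena_com_dev_reset()` hunk just above does two things worth calling out: it packs the new `reset_reason` argument into the `DEV_CTL` register next to the reset bit, and it derives the admin-command completion timeout from a capability register whose resolution is 100 ms. A minimal standalone sketch of both computations follows; the shift/mask values and the default timeout are assumptions for illustration (the real definitions live in `ena_regs_defs.h` / `ena_com.c`), not quotes from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Register layout constants -- assumed values, mirroring the hunk above. */
#define ENA_REGS_DEV_CTL_DEV_RESET_MASK     0x1u
#define ENA_REGS_DEV_CTL_RESET_REASON_SHIFT 28
#define ENA_REGS_DEV_CTL_RESET_REASON_MASK  0xf0000000u

/* Assumed fallback used when the capability register reports no timeout. */
#define ADMIN_CMD_TIMEOUT_US 3000000u

/* Encode the reset reason alongside the reset bit, as the patch does
 * before writing ENA_REGS_DEV_CTL_OFF. */
static uint32_t ena_encode_reset(uint32_t reset_reason)
{
	uint32_t reset_val = ENA_REGS_DEV_CTL_DEV_RESET_MASK;

	reset_val |= (reset_reason << ENA_REGS_DEV_CTL_RESET_REASON_SHIFT) &
		     ENA_REGS_DEV_CTL_RESET_REASON_MASK;
	return reset_val;
}

/* Convert the raw timeout field (100 ms resolution) to microseconds,
 * falling back to the compile-time default when the field is zero. */
static uint32_t ena_admin_timeout_us(uint32_t timeout_reg)
{
	return timeout_reg ? timeout_reg * 100000u : ADMIN_CMD_TIMEOUT_US;
}
```

With reason 0 only the reset bit is set; a nonzero reason lands in the top nibble, which is why the mask-and-shift pair must agree with the device's register layout.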
static int ena_get_dev_stats(struct ena_com_dev *ena_dev, - struct ena_admin_aq_get_stats_cmd *get_cmd, - struct ena_admin_acq_get_stats_resp *get_resp, + struct ena_com_stats_ctx *ctx, enum ena_admin_get_stats_type type) { + struct ena_admin_aq_get_stats_cmd *get_cmd = &ctx->get_cmd; + struct ena_admin_acq_get_stats_resp *get_resp = &ctx->get_resp; struct ena_com_admin_queue *admin_queue; - int ret = 0; - - if (!ena_dev) { - ena_trc_err("%s : ena_dev is NULL\n", __func__); - return ENA_COM_NO_DEVICE; - } + int ret; admin_queue = &ena_dev->admin_queue; @@ -1945,12 +1953,11 @@ static int ena_get_dev_stats(struct ena_com_dev *ena_dev, get_cmd->aq_common_descriptor.flags = 0; get_cmd->type = type; - ret = ena_com_execute_admin_command( - admin_queue, - (struct ena_admin_aq_entry *)get_cmd, - sizeof(*get_cmd), - (struct ena_admin_acq_entry *)get_resp, - sizeof(*get_resp)); + ret = ena_com_execute_admin_command(admin_queue, + (struct ena_admin_aq_entry *)get_cmd, + sizeof(*get_cmd), + (struct ena_admin_acq_entry *)get_resp, + sizeof(*get_resp)); if (unlikely(ret)) ena_trc_err("Failed to get stats. 
error: %d\n", ret); @@ -1961,78 +1968,28 @@ static int ena_get_dev_stats(struct ena_com_dev *ena_dev, int ena_com_get_dev_basic_stats(struct ena_com_dev *ena_dev, struct ena_admin_basic_stats *stats) { - int ret = 0; - struct ena_admin_aq_get_stats_cmd get_cmd; - struct ena_admin_acq_get_stats_resp get_resp; + struct ena_com_stats_ctx ctx; + int ret; - memset(&get_cmd, 0x0, sizeof(get_cmd)); - ret = ena_get_dev_stats(ena_dev, &get_cmd, &get_resp, - ENA_ADMIN_GET_STATS_TYPE_BASIC); + memset(&ctx, 0x0, sizeof(ctx)); + ret = ena_get_dev_stats(ena_dev, &ctx, ENA_ADMIN_GET_STATS_TYPE_BASIC); if (likely(ret == 0)) - memcpy(stats, &get_resp.basic_stats, - sizeof(get_resp.basic_stats)); + memcpy(stats, &ctx.get_resp.basic_stats, + sizeof(ctx.get_resp.basic_stats)); return ret; } -int ena_com_get_dev_extended_stats(struct ena_com_dev *ena_dev, char *buff, - u32 len) -{ - int ret = 0; - struct ena_admin_aq_get_stats_cmd get_cmd; - struct ena_admin_acq_get_stats_resp get_resp; - ena_mem_handle_t mem_handle = 0; - void *virt_addr; - dma_addr_t phys_addr; - - ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, len, - virt_addr, phys_addr, mem_handle); - if (!virt_addr) { - ret = ENA_COM_NO_MEM; - goto done; - } - memset(&get_cmd, 0x0, sizeof(get_cmd)); - ret = ena_com_mem_addr_set(ena_dev, - &get_cmd.u.control_buffer.address, - phys_addr); - if (unlikely(ret)) { - ena_trc_err("memory address set failed\n"); - return ret; - } - get_cmd.u.control_buffer.length = len; - - get_cmd.device_id = ena_dev->stats_func; - get_cmd.queue_idx = ena_dev->stats_queue; - - ret = ena_get_dev_stats(ena_dev, &get_cmd, &get_resp, - ENA_ADMIN_GET_STATS_TYPE_EXTENDED); - if (ret < 0) - goto free_ext_stats_mem; - - ret = snprintf(buff, len, "%s", (char *)virt_addr); - -free_ext_stats_mem: - ENA_MEM_FREE_COHERENT(ena_dev->dmadev, len, virt_addr, phys_addr, - mem_handle); -done: - return ret; -} - int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) { struct ena_com_admin_queue *admin_queue; struct 
ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; - int ret = 0; - - if (unlikely(!ena_dev)) { - ena_trc_err("%s : ena_dev is NULL\n", __func__); - return ENA_COM_NO_DEVICE; - } + int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_MTU)) { - ena_trc_info("Feature %d isn't supported\n", ENA_ADMIN_MTU); - return ENA_COM_PERMISSION; + ena_trc_dbg("Feature %d isn't supported\n", ENA_ADMIN_MTU); + return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); @@ -2049,11 +2006,10 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) (struct ena_admin_acq_entry *)&resp, sizeof(resp)); - if (unlikely(ret)) { + if (unlikely(ret)) ena_trc_err("Failed to set mtu %d. error: %d\n", mtu, ret); - return ENA_COM_INVAL; - } - return 0; + + return ret; } int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, @@ -2066,7 +2022,7 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, ENA_ADMIN_STATELESS_OFFLOAD_CONFIG); if (unlikely(ret)) { ena_trc_err("Failed to get offload capabilities %d\n", ret); - return ENA_COM_INVAL; + return ret; } memcpy(offload, &resp.u.offload, sizeof(resp.u.offload)); @@ -2085,9 +2041,9 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION)) { - ena_trc_info("Feature %d isn't supported\n", - ENA_ADMIN_RSS_HASH_FUNCTION); - return ENA_COM_PERMISSION; + ena_trc_dbg("Feature %d isn't supported\n", + ENA_ADMIN_RSS_HASH_FUNCTION); + return ENA_COM_UNSUPPORTED; } /* Validate hash function is supported */ @@ -2099,7 +2055,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) if (get_resp.u.flow_hash_func.supported_func & (1 << rss->hash_func)) { ena_trc_err("Func hash %d isn't supported by device, abort\n", rss->hash_func); - return ENA_COM_PERMISSION; + return ENA_COM_UNSUPPORTED; } memset(&cmd, 0x0, sizeof(cmd)); @@ -2158,7 +2114,7 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, if (!((1 << func) 
& get_resp.u.flow_hash_func.supported_func)) { ena_trc_err("Flow hash function %d isn't supported\n", func); - return ENA_COM_PERMISSION; + return ENA_COM_UNSUPPORTED; } switch (func) { @@ -2207,7 +2163,7 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev, if (unlikely(rc)) return rc; - rss->hash_func = (enum ena_admin_hash_functions)get_resp.u.flow_hash_func.selected_func; + rss->hash_func = get_resp.u.flow_hash_func.selected_func; if (func) *func = rss->hash_func; @@ -2242,17 +2198,20 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) { struct ena_com_admin_queue *admin_queue = &ena_dev->admin_queue; struct ena_rss *rss = &ena_dev->rss; + struct ena_admin_feature_rss_hash_control *hash_ctrl = rss->hash_ctrl; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_INPUT)) { - ena_trc_info("Feature %d isn't supported\n", - ENA_ADMIN_RSS_HASH_INPUT); - return ENA_COM_PERMISSION; + ena_trc_dbg("Feature %d isn't supported\n", + ENA_ADMIN_RSS_HASH_INPUT); + return ENA_COM_UNSUPPORTED; } + memset(&cmd, 0x0, sizeof(cmd)); + cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; @@ -2268,20 +2227,17 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) ena_trc_err("memory address set failed\n"); return ret; } - cmd.control_buffer.length = - sizeof(struct ena_admin_feature_rss_hash_control); + cmd.control_buffer.length = sizeof(*hash_ctrl); ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), (struct ena_admin_acq_entry *)&resp, sizeof(resp)); - if (unlikely(ret)) { + if (unlikely(ret)) ena_trc_err("Failed to set hash input. 
error: %d\n", ret); - return ENA_COM_INVAL; - } - return 0; + return ret; } int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) @@ -2293,7 +2249,7 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) int rc, i; /* Get the supported hash input */ - rc = ena_com_get_hash_ctrl(ena_dev, (enum ena_admin_flow_hash_proto)0, NULL); + rc = ena_com_get_hash_ctrl(ena_dev, 0, NULL); if (unlikely(rc)) return rc; @@ -2322,7 +2278,7 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) hash_ctrl->selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA; - hash_ctrl->selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = + hash_ctrl->selected_fields[ENA_ADMIN_RSS_NOT_IP].fields = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA; for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; i++) { @@ -2332,7 +2288,7 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) ena_trc_err("hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n", i, hash_ctrl->supported_fields[i].fields, hash_ctrl->selected_fields[i].fields); - return ENA_COM_PERMISSION; + return ENA_COM_UNSUPPORTED; } } @@ -2340,7 +2296,7 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) /* In case of failure, restore the old hash ctrl */ if (unlikely(rc)) - ena_com_get_hash_ctrl(ena_dev, (enum ena_admin_flow_hash_proto)0, NULL); + ena_com_get_hash_ctrl(ena_dev, 0, NULL); return rc; } @@ -2377,7 +2333,7 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev, /* In case of failure, restore the old hash ctrl */ if (unlikely(rc)) - ena_com_get_hash_ctrl(ena_dev, (enum ena_admin_flow_hash_proto)0, NULL); + ena_com_get_hash_ctrl(ena_dev, 0, NULL); return 0; } @@ -2404,14 +2360,13 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) struct ena_rss *rss = &ena_dev->rss; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; - int ret = 0; + int ret; - if (!ena_com_check_supported_feature_id( - 
ena_dev, - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG)) { - ena_trc_info("Feature %d isn't supported\n", - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG); - return ENA_COM_PERMISSION; + if (!ena_com_check_supported_feature_id(ena_dev, + ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG)) { + ena_trc_dbg("Feature %d isn't supported\n", + ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG); + return ENA_COM_UNSUPPORTED; } ret = ena_com_ind_tbl_convert_to_device(ena_dev); @@ -2446,12 +2401,10 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) (struct ena_admin_acq_entry *)&resp, sizeof(resp)); - if (unlikely(ret)) { + if (unlikely(ret)) ena_trc_err("Failed to set indirect table. error: %d\n", ret); - return ENA_COM_INVAL; - } - return 0; + return ret; } int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl) @@ -2538,17 +2491,18 @@ int ena_com_allocate_host_info(struct ena_com_dev *ena_dev) } int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev, - u32 debug_area_size) { + u32 debug_area_size) +{ struct ena_host_attribute *host_attr = &ena_dev->host_attr; - ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, - debug_area_size, - host_attr->debug_area_virt_addr, - host_attr->debug_area_dma_addr, - host_attr->debug_area_dma_handle); - if (unlikely(!host_attr->debug_area_virt_addr)) { - host_attr->debug_area_size = 0; - return ENA_COM_NO_MEM; + ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, + debug_area_size, + host_attr->debug_area_virt_addr, + host_attr->debug_area_dma_addr, + host_attr->debug_area_dma_handle); + if (unlikely(!host_attr->debug_area_virt_addr)) { + host_attr->debug_area_size = 0; + return ENA_COM_NO_MEM; } host_attr->debug_area_size = debug_area_size; @@ -2590,6 +2544,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev) struct ena_com_admin_queue *admin_queue; struct ena_admin_set_feat_cmd cmd; struct ena_admin_set_feat_resp resp; + int ret; /* Host attribute config is called before ena_com_get_dev_attr_feat @@ -2635,14 +2590,12 @@ int 
ena_com_set_host_attributes(struct ena_com_dev *ena_dev) /* Interrupt moderation */ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev) { - return ena_com_check_supported_feature_id( - ena_dev, - ENA_ADMIN_INTERRUPT_MODERATION); + return ena_com_check_supported_feature_id(ena_dev, + ENA_ADMIN_INTERRUPT_MODERATION); } -int -ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, - u32 tx_coalesce_usecs) +int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, + u32 tx_coalesce_usecs) { if (!ena_dev->intr_delay_resolution) { ena_trc_err("Illegal interrupt delay granularity value\n"); @@ -2655,9 +2608,8 @@ ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, return 0; } -int -ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, - u32 rx_coalesce_usecs) +int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, + u32 rx_coalesce_usecs) { if (!ena_dev->intr_delay_resolution) { ena_trc_err("Illegal interrupt delay granularity value\n"); @@ -2690,9 +2642,9 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) ENA_ADMIN_INTERRUPT_MODERATION); if (rc) { - if (rc == ENA_COM_PERMISSION) { - ena_trc_info("Feature %d isn't supported\n", - ENA_ADMIN_INTERRUPT_MODERATION); + if (rc == ENA_COM_UNSUPPORTED) { + ena_trc_dbg("Feature %d isn't supported\n", + ENA_ADMIN_INTERRUPT_MODERATION); rc = 0; } else { ena_trc_err("Failed to get interrupt moderation admin cmd. 
rc: %d\n", @@ -2719,8 +2671,7 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) return rc; } -void -ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev) +void ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev) { struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; @@ -2763,14 +2714,12 @@ ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev) ENA_INTR_HIGHEST_BYTES; } -unsigned int -ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev) +unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev) { return ena_dev->intr_moder_tx_interval; } -unsigned int -ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev) +unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev) { struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; @@ -2794,7 +2743,10 @@ void ena_com_init_intr_moderation_entry(struct ena_com_dev *ena_dev, intr_moder_tbl[level].intr_moder_interval /= ena_dev->intr_delay_resolution; intr_moder_tbl[level].pkts_per_interval = entry->pkts_per_interval; - intr_moder_tbl[level].bytes_per_interval = entry->bytes_per_interval; + + /* use hardcoded value until ethtool supports bytecount parameter */ + if (entry->bytes_per_interval != ENA_INTR_BYTE_COUNT_NOT_SUPPORTED) + intr_moder_tbl[level].bytes_per_interval = entry->bytes_per_interval; } void ena_com_get_intr_moderation_entry(struct ena_com_dev *ena_dev, diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h index e53459262..f58cd86a8 100644 --- a/drivers/net/ena/base/ena_com.h +++ b/drivers/net/ena/base/ena_com.h @@ -35,15 +35,7 @@ #define ENA_COM #include "ena_plat.h" -#include "ena_common_defs.h" -#include "ena_admin_defs.h" -#include "ena_eth_io_defs.h" -#include "ena_regs_defs.h" -#if defined(__linux__) && !defined(__KERNEL__) -#include -#include -#define 
__iomem -#endif +#include "ena_includes.h" #define ENA_MAX_NUM_IO_QUEUES 128U /* We need to queues for each IO (on for Tx and one for Rx) */ @@ -89,6 +81,11 @@ #define ENA_INTR_DELAY_OLD_VALUE_WEIGHT 6 #define ENA_INTR_DELAY_NEW_VALUE_WEIGHT 4 +#define ENA_INTR_MODER_LEVEL_STRIDE 1 +#define ENA_INTR_BYTE_COUNT_NOT_SUPPORTED 0xFFFFFF + +#define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF + enum ena_intr_moder_level { ENA_INTR_MODER_LOWEST = 0, ENA_INTR_MODER_LOW, @@ -120,8 +117,8 @@ struct ena_com_rx_buf_info { }; struct ena_com_io_desc_addr { - u8 __iomem *pbuf_dev_addr; /* LLQ address */ - u8 *virt_addr; + u8 __iomem *pbuf_dev_addr; /* LLQ address */ + u8 *virt_addr; dma_addr_t phys_addr; ena_mem_handle_t mem_handle; }; @@ -130,13 +127,12 @@ struct ena_com_tx_meta { u16 mss; u16 l3_hdr_len; u16 l3_hdr_offset; - u16 l3_outer_hdr_len; /* In words */ - u16 l3_outer_hdr_offset; u16 l4_hdr_len; /* In words */ }; struct ena_com_io_cq { struct ena_com_io_desc_addr cdesc_addr; + void *bus; /* Interrupt unmask register */ u32 __iomem *unmask_reg; @@ -174,6 +170,7 @@ struct ena_com_io_cq { struct ena_com_io_sq { struct ena_com_io_desc_addr desc_addr; + void *bus; u32 __iomem *db_addr; u8 __iomem *header_addr; @@ -228,8 +225,11 @@ struct ena_com_stats_admin { struct ena_com_admin_queue { void *q_dmadev; + void *bus; ena_spinlock_t q_lock; /* spinlock for the admin queue */ + struct ena_comp_ctx *comp_ctx; + u32 completion_timeout; u16 q_depth; struct ena_com_admin_cq cq; struct ena_com_admin_sq sq; @@ -266,6 +266,7 @@ struct ena_com_mmio_read { struct ena_admin_ena_mmio_req_read_less_resp *read_resp; dma_addr_t read_resp_dma_addr; ena_mem_handle_t read_resp_mem_handle; + u32 reg_read_to; /* in us */ u16 seq_num; bool readless_supported; /* spin lock to ensure a single outstanding read */ @@ -316,6 +317,7 @@ struct ena_com_dev { u8 __iomem *reg_bar; void __iomem *mem_bar; void *dmadev; + void *bus; enum ena_admin_placement_policy_type tx_mem_queue_type; u32 tx_max_header_size; @@ -340,6 
+342,7 @@ struct ena_com_dev_get_features_ctx { struct ena_admin_device_attr_feature_desc dev_attr; struct ena_admin_feature_aenq_desc aenq; struct ena_admin_feature_offload_desc offload; + struct ena_admin_ena_hw_hints hw_hints; }; struct ena_com_create_io_ctx { @@ -379,7 +382,7 @@ int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev); /* ena_com_set_mmio_read_mode - Enable/disable the mmio reg read mechanism * @ena_dev: ENA communication layer struct - * @realess_supported: readless mode (enable/disable) + * @readless_supported: readless mode (enable/disable) */ void ena_com_set_mmio_read_mode(struct ena_com_dev *ena_dev, bool readless_supported); @@ -421,14 +424,16 @@ void ena_com_admin_destroy(struct ena_com_dev *ena_dev); /* ena_com_dev_reset - Perform device FLR to the device. * @ena_dev: ENA communication layer struct + * @reset_reason: Specify what is the trigger for the reset in case of an error. * * @return - 0 on success, negative value on failure. */ -int ena_com_dev_reset(struct ena_com_dev *ena_dev); +int ena_com_dev_reset(struct ena_com_dev *ena_dev, + enum ena_regs_reset_reason_types reset_reason); /* ena_com_create_io_queue - Create io queue. * @ena_dev: ENA communication layer struct - * ena_com_create_io_ctx - create context structure + * @ctx - create context structure * * Create the submission and the completion queues. * @@ -437,8 +442,9 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev); int ena_com_create_io_queue(struct ena_com_dev *ena_dev, struct ena_com_create_io_ctx *ctx); -/* ena_com_admin_destroy - Destroy IO queue with the queue id - qid. +/* ena_com_destroy_io_queue - Destroy IO queue with the queue id - qid. * @ena_dev: ENA communication layer struct + * @qid - the caller virtual queue id. */ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid); @@ -581,9 +587,8 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag); * * @return: 0 on Success and negative value otherwise. 
*/ -int -ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, - struct ena_com_dev_get_features_ctx *get_feat_ctx); +int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, + struct ena_com_dev_get_features_ctx *get_feat_ctx); /* ena_com_get_dev_basic_stats - Get device basic statistics * @ena_dev: ENA communication layer struct @@ -608,9 +613,8 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu); * * @return: 0 on Success and negative value otherwise. */ -int -ena_com_get_offload_settings(struct ena_com_dev *ena_dev, - struct ena_admin_feature_offload_desc *offload); +int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, + struct ena_admin_feature_offload_desc *offload); /* ena_com_rss_init - Init RSS * @ena_dev: ENA communication layer struct @@ -765,8 +769,8 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev); * * Retrieve the RSS indirection table from the device. * - * @note: If the caller called ena_com_indirect_table_fill_entry but didn't - * flash it to the device, the new configuration will be lost. + * @note: If the caller called ena_com_indirect_table_fill_entry but didn't flash + * it to the device, the new configuration will be lost. * * @return: 0 on Success and negative value otherwise. */ @@ -874,8 +878,7 @@ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev); * moderation table back to the default parameters. * @ena_dev: ENA communication layer struct */ -void -ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev); +void ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev); /* ena_com_update_nonadaptive_moderation_interval_tx - Update the * non-adaptive interval in Tx direction. @@ -884,9 +887,8 @@ ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev); * * @return - 0 on success, negative value on failure. 
*/ -int -ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, - u32 tx_coalesce_usecs); +int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, + u32 tx_coalesce_usecs); /* ena_com_update_nonadaptive_moderation_interval_rx - Update the * non-adaptive interval in Rx direction. @@ -895,9 +897,8 @@ ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, * * @return - 0 on success, negative value on failure. */ -int -ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, - u32 rx_coalesce_usecs); +int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, + u32 rx_coalesce_usecs); /* ena_com_get_nonadaptive_moderation_interval_tx - Retrieve the * non-adaptive interval in Tx direction. @@ -905,8 +906,7 @@ ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, * * @return - interval in usec */ -unsigned int -ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev); +unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev); /* ena_com_get_nonadaptive_moderation_interval_rx - Retrieve the * non-adaptive interval in Rx direction. @@ -914,8 +914,7 @@ ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev); * * @return - interval in usec */ -unsigned int -ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev); +unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev); /* ena_com_init_intr_moderation_entry - Update a single entry in the interrupt * moderation table. 
@@ -940,20 +939,17 @@ void ena_com_get_intr_moderation_entry(struct ena_com_dev *ena_dev, enum ena_intr_moder_level level, struct ena_intr_moder_entry *entry); -static inline bool -ena_com_get_adaptive_moderation_enabled(struct ena_com_dev *ena_dev) +static inline bool ena_com_get_adaptive_moderation_enabled(struct ena_com_dev *ena_dev) { return ena_dev->adaptive_coalescing; } -static inline void -ena_com_enable_adaptive_moderation(struct ena_com_dev *ena_dev) +static inline void ena_com_enable_adaptive_moderation(struct ena_com_dev *ena_dev) { ena_dev->adaptive_coalescing = true; } -static inline void -ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_dev) +static inline void ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_dev) { ena_dev->adaptive_coalescing = false; } @@ -966,12 +962,11 @@ ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_dev) * @moder_tbl_idx: Current table level as input update new level as return * value. */ -static inline void -ena_com_calculate_interrupt_delay(struct ena_com_dev *ena_dev, - unsigned int pkts, - unsigned int bytes, - unsigned int *smoothed_interval, - unsigned int *moder_tbl_idx) +static inline void ena_com_calculate_interrupt_delay(struct ena_com_dev *ena_dev, + unsigned int pkts, + unsigned int bytes, + unsigned int *smoothed_interval, + unsigned int *moder_tbl_idx) { enum ena_intr_moder_level curr_moder_idx, new_moder_idx; struct ena_intr_moder_entry *curr_moder_entry; @@ -1001,17 +996,20 @@ ena_com_calculate_interrupt_delay(struct ena_com_dev *ena_dev, if (curr_moder_idx == ENA_INTR_MODER_LOWEST) { if ((pkts > curr_moder_entry->pkts_per_interval) || (bytes > curr_moder_entry->bytes_per_interval)) - new_moder_idx = (enum ena_intr_moder_level)(curr_moder_idx + 1); + new_moder_idx = + (enum ena_intr_moder_level)(curr_moder_idx + ENA_INTR_MODER_LEVEL_STRIDE); } else { - pred_moder_entry = &intr_moder_tbl[curr_moder_idx - 1]; + pred_moder_entry = &intr_moder_tbl[curr_moder_idx - 
ENA_INTR_MODER_LEVEL_STRIDE]; if ((pkts <= pred_moder_entry->pkts_per_interval) || (bytes <= pred_moder_entry->bytes_per_interval)) - new_moder_idx = (enum ena_intr_moder_level)(curr_moder_idx - 1); + new_moder_idx = + (enum ena_intr_moder_level)(curr_moder_idx - ENA_INTR_MODER_LEVEL_STRIDE); else if ((pkts > curr_moder_entry->pkts_per_interval) || (bytes > curr_moder_entry->bytes_per_interval)) { if (curr_moder_idx != ENA_INTR_MODER_HIGHEST) - new_moder_idx = (enum ena_intr_moder_level)(curr_moder_idx + 1); + new_moder_idx = + (enum ena_intr_moder_level)(curr_moder_idx + ENA_INTR_MODER_LEVEL_STRIDE); } } new_moder_entry = &intr_moder_tbl[new_moder_idx]; @@ -1044,18 +1042,12 @@ static inline void ena_com_update_intr_reg(struct ena_eth_io_intr_reg *intr_reg, intr_reg->intr_control |= (tx_delay_interval << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) - & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; + & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK; if (unmask) intr_reg->intr_control |= ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK; } -int ena_com_get_dev_extended_stats(struct ena_com_dev *ena_dev, char *buff, - u32 len); - -int ena_com_extended_stats_set_func_queue(struct ena_com_dev *ena_dev, - u32 funct_queue); - #if defined(__cplusplus) } #endif /* __cplusplus */ diff --git a/drivers/net/ena/base/ena_defs/ena_admin_defs.h b/drivers/net/ena/base/ena_defs/ena_admin_defs.h index 7a031d903..04d4e9a59 100644 --- a/drivers/net/ena/base/ena_defs/ena_admin_defs.h +++ b/drivers/net/ena/base/ena_defs/ena_admin_defs.h @@ -34,174 +34,140 @@ #ifndef _ENA_ADMIN_H_ #define _ENA_ADMIN_H_ -/* admin commands opcodes */ enum ena_admin_aq_opcode { - /* create submission queue */ - ENA_ADMIN_CREATE_SQ = 1, + ENA_ADMIN_CREATE_SQ = 1, - /* destroy submission queue */ - ENA_ADMIN_DESTROY_SQ = 2, + ENA_ADMIN_DESTROY_SQ = 2, - /* create completion queue */ - ENA_ADMIN_CREATE_CQ = 3, + ENA_ADMIN_CREATE_CQ = 3, - /* destroy completion queue */ - ENA_ADMIN_DESTROY_CQ = 4, + ENA_ADMIN_DESTROY_CQ = 4, - /* get 
capabilities of particular feature */ - ENA_ADMIN_GET_FEATURE = 8, + ENA_ADMIN_GET_FEATURE = 8, - /* get capabilities of particular feature */ - ENA_ADMIN_SET_FEATURE = 9, + ENA_ADMIN_SET_FEATURE = 9, - /* get statistics */ - ENA_ADMIN_GET_STATS = 11, + ENA_ADMIN_GET_STATS = 11, }; -/* admin command completion status codes */ enum ena_admin_aq_completion_status { - /* Request completed successfully */ - ENA_ADMIN_SUCCESS = 0, + ENA_ADMIN_SUCCESS = 0, - /* no resources to satisfy request */ - ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE = 1, + ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE = 1, - /* Bad opcode in request descriptor */ - ENA_ADMIN_BAD_OPCODE = 2, + ENA_ADMIN_BAD_OPCODE = 2, - /* Unsupported opcode in request descriptor */ - ENA_ADMIN_UNSUPPORTED_OPCODE = 3, + ENA_ADMIN_UNSUPPORTED_OPCODE = 3, - /* Wrong request format */ - ENA_ADMIN_MALFORMED_REQUEST = 4, + ENA_ADMIN_MALFORMED_REQUEST = 4, - /* One of parameters is not valid. Provided in ACQ entry - * extended_status - */ - ENA_ADMIN_ILLEGAL_PARAMETER = 5, + /* Additional status is provided in ACQ entry extended_status */ + ENA_ADMIN_ILLEGAL_PARAMETER = 5, - /* unexpected error */ - ENA_ADMIN_UNKNOWN_ERROR = 6, + ENA_ADMIN_UNKNOWN_ERROR = 6, }; -/* get/set feature subcommands opcodes */ enum ena_admin_aq_feature_id { - /* list of all supported attributes/capabilities in the ENA */ - ENA_ADMIN_DEVICE_ATTRIBUTES = 1, + ENA_ADMIN_DEVICE_ATTRIBUTES = 1, + + ENA_ADMIN_MAX_QUEUES_NUM = 2, - /* max number of supported queues per for every queues type */ - ENA_ADMIN_MAX_QUEUES_NUM = 2, + ENA_ADMIN_HW_HINTS = 3, - /* Receive Side Scaling (RSS) function */ - ENA_ADMIN_RSS_HASH_FUNCTION = 10, + ENA_ADMIN_RSS_HASH_FUNCTION = 10, - /* stateless TCP/UDP/IP offload capabilities. 
*/ - ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11, + ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11, - /* Multiple tuples flow table configuration */ - ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12, + ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12, - /* max MTU, current MTU */ - ENA_ADMIN_MTU = 14, + ENA_ADMIN_MTU = 14, - /* Receive Side Scaling (RSS) hash input */ - ENA_ADMIN_RSS_HASH_INPUT = 18, + ENA_ADMIN_RSS_HASH_INPUT = 18, - /* interrupt moderation parameters */ - ENA_ADMIN_INTERRUPT_MODERATION = 20, + ENA_ADMIN_INTERRUPT_MODERATION = 20, - /* AENQ configuration */ - ENA_ADMIN_AENQ_CONFIG = 26, + ENA_ADMIN_AENQ_CONFIG = 26, - /* Link configuration */ - ENA_ADMIN_LINK_CONFIG = 27, + ENA_ADMIN_LINK_CONFIG = 27, - /* Host attributes configuration */ - ENA_ADMIN_HOST_ATTR_CONFIG = 28, + ENA_ADMIN_HOST_ATTR_CONFIG = 28, - /* Number of valid opcodes */ - ENA_ADMIN_FEATURES_OPCODE_NUM = 32, + ENA_ADMIN_FEATURES_OPCODE_NUM = 32, }; -/* descriptors and headers placement */ enum ena_admin_placement_policy_type { - /* descriptors and headers are in OS memory */ - ENA_ADMIN_PLACEMENT_POLICY_HOST = 1, + /* descriptors and headers are in host memory */ + ENA_ADMIN_PLACEMENT_POLICY_HOST = 1, - /* descriptors and headers in device memory (a.k.a Low Latency + /* descriptors and headers are in device memory (a.k.a Low Latency * Queue) */ - ENA_ADMIN_PLACEMENT_POLICY_DEV = 3, + ENA_ADMIN_PLACEMENT_POLICY_DEV = 3, }; -/* link speeds */ enum ena_admin_link_types { - ENA_ADMIN_LINK_SPEED_1G = 0x1, + ENA_ADMIN_LINK_SPEED_1G = 0x1, - ENA_ADMIN_LINK_SPEED_2_HALF_G = 0x2, + ENA_ADMIN_LINK_SPEED_2_HALF_G = 0x2, - ENA_ADMIN_LINK_SPEED_5G = 0x4, + ENA_ADMIN_LINK_SPEED_5G = 0x4, - ENA_ADMIN_LINK_SPEED_10G = 0x8, + ENA_ADMIN_LINK_SPEED_10G = 0x8, - ENA_ADMIN_LINK_SPEED_25G = 0x10, + ENA_ADMIN_LINK_SPEED_25G = 0x10, - ENA_ADMIN_LINK_SPEED_40G = 0x20, + ENA_ADMIN_LINK_SPEED_40G = 0x20, - ENA_ADMIN_LINK_SPEED_50G = 0x40, + ENA_ADMIN_LINK_SPEED_50G = 0x40, - ENA_ADMIN_LINK_SPEED_100G = 0x80, + 
ENA_ADMIN_LINK_SPEED_100G = 0x80, - ENA_ADMIN_LINK_SPEED_200G = 0x100, + ENA_ADMIN_LINK_SPEED_200G = 0x100, - ENA_ADMIN_LINK_SPEED_400G = 0x200, + ENA_ADMIN_LINK_SPEED_400G = 0x200, }; -/* completion queue update policy */ enum ena_admin_completion_policy_type { - /* cqe for each sq descriptor */ - ENA_ADMIN_COMPLETION_POLICY_DESC = 0, + /* completion queue entry for each sq descriptor */ + ENA_ADMIN_COMPLETION_POLICY_DESC = 0, - /* cqe upon request in sq descriptor */ - ENA_ADMIN_COMPLETION_POLICY_DESC_ON_DEMAND = 1, + /* completion queue entry upon request in sq descriptor */ + ENA_ADMIN_COMPLETION_POLICY_DESC_ON_DEMAND = 1, /* current queue head pointer is updated in OS memory upon sq * descriptor request */ - ENA_ADMIN_COMPLETION_POLICY_HEAD_ON_DEMAND = 2, + ENA_ADMIN_COMPLETION_POLICY_HEAD_ON_DEMAND = 2, /* current queue head pointer is updated in OS memory for each sq * descriptor */ - ENA_ADMIN_COMPLETION_POLICY_HEAD = 3, + ENA_ADMIN_COMPLETION_POLICY_HEAD = 3, }; -/* type of get statistics command */ +/* basic stats return ena_admin_basic_stats while extended stats return a + * buffer (string format) with additional statistics per queue and per + * device id + */ enum ena_admin_get_stats_type { - /* Basic statistics */ - ENA_ADMIN_GET_STATS_TYPE_BASIC = 0, + ENA_ADMIN_GET_STATS_TYPE_BASIC = 0, - /* Extended statistics */ - ENA_ADMIN_GET_STATS_TYPE_EXTENDED = 1, + ENA_ADMIN_GET_STATS_TYPE_EXTENDED = 1, }; -/* scope of get statistics command */ enum ena_admin_get_stats_scope { - ENA_ADMIN_SPECIFIC_QUEUE = 0, + ENA_ADMIN_SPECIFIC_QUEUE = 0, - ENA_ADMIN_ETH_TRAFFIC = 1, + ENA_ADMIN_ETH_TRAFFIC = 1, }; -/* ENA Admin Queue (AQ) common descriptor */ struct ena_admin_aq_common_desc { - /* word 0 : */ - /* command identificator to associate it with the completion - * 11:0 : command_id + /* 11:0 : command_id * 15:12 : reserved12 */ uint16_t command_id; - /* as appears in ena_aq_opcode */ + /* as appears in ena_admin_aq_opcode */ uint8_t opcode; /* 0 : phase @@
-214,24 +180,17 @@ struct ena_admin_aq_common_desc { uint8_t flags; }; -/* used in ena_aq_entry. Can point directly to control data, or to a page - * list chunk. Used also at the end of indirect mode page list chunks, for - * chaining. +/* used in ena_admin_aq_entry. Can point directly to control data, or to a + * page list chunk. Used also at the end of indirect mode page list chunks, + * for chaining. */ struct ena_admin_ctrl_buff_info { - /* word 0 : indicates length of the buffer pointed by - * control_buffer_address. - */ uint32_t length; - /* words 1:2 : points to control buffer (direct or indirect) */ struct ena_common_mem_addr address; }; -/* submission queue full identification */ struct ena_admin_sq { - /* word 0 : */ - /* queue id */ uint16_t sq_idx; /* 4:0 : reserved @@ -242,36 +201,25 @@ struct ena_admin_sq { uint8_t reserved1; }; -/* AQ entry format */ struct ena_admin_aq_entry { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* words 1:3 : */ union { - /* command specific inline data */ uint32_t inline_data_w1[3]; - /* words 1:3 : points to control buffer (direct or - * indirect, chained if needed) - */ struct ena_admin_ctrl_buff_info control_buffer; } u; - /* command specific inline data */ uint32_t inline_data_w4[12]; }; -/* ENA Admin Completion Queue (ACQ) common descriptor */ struct ena_admin_acq_common_desc { - /* word 0 : */ /* command identifier to associate it with the aq descriptor * 11:0 : command_id * 15:12 : reserved12 */ uint16_t command; - /* status of request execution */ uint8_t status; /* 0 : phase @@ -279,33 +227,21 @@ struct ena_admin_acq_common_desc { */ uint8_t flags; - /* word 1 : */ - /* provides additional info */ uint16_t extended_status; - /* submission queue head index, serves as a hint what AQ entries can - * be revoked - */ + /* serves as a hint what AQ entries can be revoked */ uint16_t sq_head_indx; }; -/* ACQ entry format */ struct ena_admin_acq_entry { - /* words 0:1 : */ struct 
ena_admin_acq_common_desc acq_common_descriptor; - /* response type specific data */ uint32_t response_specific_data[14]; }; -/* ENA AQ Create Submission Queue command. Placed in control buffer pointed - * by AQ entry - */ struct ena_admin_aq_create_sq_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* word 1 : */ /* 4:0 : reserved0_w1 * 7:5 : sq_direction - 0x1 - Tx, 0x2 - Rx */ @@ -337,7 +273,6 @@ struct ena_admin_aq_create_sq_cmd { */ uint8_t sq_caps_3; - /* word 2 : */ /* associated completion queue id. This CQ must be created prior to * SQ creation */ @@ -346,85 +281,62 @@ struct ena_admin_aq_create_sq_cmd { /* submission queue depth in entries */ uint16_t sq_depth; - /* words 3:4 : SQ physical base address in OS memory. This field - * should not be used for Low Latency queues. Has to be page - * aligned. + /* SQ physical base address in OS memory. This field should not be + * used for Low Latency queues. Has to be page aligned. */ struct ena_common_mem_addr sq_ba; - /* words 5:6 : specifies queue head writeback location in OS - * memory. Valid if completion_policy is set to - * completion_policy_head_on_demand or completion_policy_head. Has - * to be cache aligned + /* specifies queue head writeback location in OS memory. Valid if + * completion_policy is set to completion_policy_head_on_demand or + * completion_policy_head. Has to be cache aligned */ struct ena_common_mem_addr sq_head_writeback; - /* word 7 : reserved word */ uint32_t reserved0_w7; - /* word 8 : reserved word */ uint32_t reserved0_w8; }; -/* submission queue direction */ enum ena_admin_sq_direction { - ENA_ADMIN_SQ_DIRECTION_TX = 1, + ENA_ADMIN_SQ_DIRECTION_TX = 1, - ENA_ADMIN_SQ_DIRECTION_RX = 2, + ENA_ADMIN_SQ_DIRECTION_RX = 2, }; -/* ENA Response for Create SQ Command. 
Appears in ACQ entry as - * response_specific_data - */ struct ena_admin_acq_create_sq_resp_desc { - /* words 0:1 : Common Admin Queue completion descriptor */ struct ena_admin_acq_common_desc acq_common_desc; - /* word 2 : */ - /* sq identifier */ uint16_t sq_idx; uint16_t reserved; - /* word 3 : queue doorbell address as an offset to PCIe MMIO REG BAR */ + /* queue doorbell address as an offset to PCIe MMIO REG BAR */ uint32_t sq_doorbell_offset; - /* word 4 : low latency queue ring base address as an offset to - * PCIe MMIO LLQ_MEM BAR + /* low latency queue ring base address as an offset to PCIe MMIO + * LLQ_MEM BAR */ uint32_t llq_descriptors_offset; - /* word 5 : low latency queue headers' memory as an offset to PCIe - * MMIO LLQ_MEM BAR + /* low latency queue headers' memory as an offset to PCIe MMIO + * LLQ_MEM BAR */ uint32_t llq_headers_offset; }; -/* ENA AQ Destroy Submission Queue command. Placed in control buffer - * pointed by AQ entry - */ struct ena_admin_aq_destroy_sq_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* words 1 : */ struct ena_admin_sq sq; }; -/* ENA Response for Destroy SQ Command. Appears in ACQ entry as - * response_specific_data - */ struct ena_admin_acq_destroy_sq_resp_desc { - /* words 0:1 : Common Admin Queue completion descriptor */ struct ena_admin_acq_common_desc acq_common_desc; }; -/* ENA AQ Create Completion Queue command */ struct ena_admin_aq_create_cq_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* word 1 : */ /* 4:0 : reserved5 * 5 : interrupt_mode_enabled - if set, cq operates * in interrupt mode, otherwise - polling @@ -441,62 +353,39 @@ struct ena_admin_aq_create_cq_cmd { /* completion queue depth in # of entries. must be power of 2 */ uint16_t cq_depth; - /* word 2 : msix vector assigned to this cq */ + /* msix vector assigned to this cq */ uint32_t msix_vector; - /* words 3:4 : cq physical base address in OS memory. 
CQ must be - * physically contiguous + /* cq physical base address in OS memory. CQ must be physically + * contiguous */ struct ena_common_mem_addr cq_ba; }; -/* ENA Response for Create CQ Command. Appears in ACQ entry as response - * specific data - */ struct ena_admin_acq_create_cq_resp_desc { - /* words 0:1 : Common Admin Queue completion descriptor */ struct ena_admin_acq_common_desc acq_common_desc; - /* word 2 : */ - /* cq identifier */ uint16_t cq_idx; - /* actual cq depth in # of entries */ + /* actual cq depth in number of entries */ uint16_t cq_actual_depth; - /* word 3 : cpu numa node address as an offset to PCIe MMIO REG BAR */ uint32_t numa_node_register_offset; - /* word 4 : completion head doorbell address as an offset to PCIe - * MMIO REG BAR - */ uint32_t cq_head_db_register_offset; - /* word 5 : interrupt unmask register address as an offset into - * PCIe MMIO REG BAR - */ uint32_t cq_interrupt_unmask_register_offset; }; -/* ENA AQ Destroy Completion Queue command. Placed in control buffer - * pointed by AQ entry - */ struct ena_admin_aq_destroy_cq_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* word 1 : */ - /* associated queue id. */ uint16_t cq_idx; uint16_t reserved1; }; -/* ENA Response for Destroy CQ Command. 
Appears in ACQ entry as - * response_specific_data - */ struct ena_admin_acq_destroy_cq_resp_desc { - /* words 0:1 : Common Admin Queue completion descriptor */ struct ena_admin_acq_common_desc acq_common_desc; }; @@ -504,21 +393,15 @@ struct ena_admin_acq_destroy_cq_resp_desc { * buffer pointed by AQ entry */ struct ena_admin_aq_get_stats_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* words 1:3 : */ union { /* command specific inline data */ uint32_t inline_data_w1[3]; - /* words 1:3 : points to control buffer (direct or - * indirect, chained if needed) - */ struct ena_admin_ctrl_buff_info control_buffer; } u; - /* word 4 : */ /* stats type as defined in enum ena_admin_get_stats_type */ uint8_t type; @@ -527,7 +410,6 @@ struct ena_admin_aq_get_stats_cmd { uint16_t reserved3; - /* word 5 : */ /* queue id. used when scope is specific_queue */ uint16_t queue_idx; @@ -539,89 +421,60 @@ struct ena_admin_aq_get_stats_cmd { /* Basic Statistics Command. */ struct ena_admin_basic_stats { - /* word 0 : */ uint32_t tx_bytes_low; - /* word 1 : */ uint32_t tx_bytes_high; - /* word 2 : */ uint32_t tx_pkts_low; - /* word 3 : */ uint32_t tx_pkts_high; - /* word 4 : */ uint32_t rx_bytes_low; - /* word 5 : */ uint32_t rx_bytes_high; - /* word 6 : */ uint32_t rx_pkts_low; - /* word 7 : */ uint32_t rx_pkts_high; - /* word 8 : */ uint32_t rx_drops_low; - /* word 9 : */ uint32_t rx_drops_high; }; -/* ENA Response for Get Statistics Command. Appears in ACQ entry as - * response_specific_data - */ struct ena_admin_acq_get_stats_resp { - /* words 0:1 : Common Admin Queue completion descriptor */ struct ena_admin_acq_common_desc acq_common_desc; - /* words 2:11 : */ struct ena_admin_basic_stats basic_stats; }; -/* ENA Get/Set Feature common descriptor. 
Appears as inline word in - * ena_aq_entry - */ struct ena_admin_get_set_feature_common_desc { - /* word 0 : */ /* 1:0 : select - 0x1 - current value; 0x3 - default * value * 7:3 : reserved3 */ uint8_t flags; - /* as appears in ena_feature_id */ + /* as appears in ena_admin_aq_feature_id */ uint8_t feature_id; - /* reserved16 */ uint16_t reserved16; }; -/* ENA Device Attributes Feature descriptor. */ struct ena_admin_device_attr_feature_desc { - /* word 0 : implementation id */ uint32_t impl_id; - /* word 1 : device version */ uint32_t device_version; - /* word 2 : bit map of which bits are supported value of 1 - * indicated that this feature is supported and can perform SET/GET - * for it - */ + /* bitmap of ena_admin_aq_feature_id */ uint32_t supported_features; - /* word 3 : */ uint32_t reserved3; - /* word 4 : Indicates how many bits are used physical address - * access. - */ + /* Indicates how many bits are used for physical address access. */ uint32_t phys_addr_width; - /* word 5 : Indicates how many bits are used virtual address access. */ + /* Indicates how many bits are used for virtual address access. */ uint32_t virt_addr_width; /* unicast MAC address (in Network byte order) */ @@ -629,36 +482,27 @@ struct ena_admin_device_attr_feature_desc { uint8_t reserved7[2]; - /* word 8 : Max supported MTU value */ uint32_t max_mtu; }; -/* ENA Max Queues Feature descriptor.
*/ struct ena_admin_queue_feature_desc { - /* word 0 : Max number of submission queues (including LLQs) */ + /* including LLQs */ uint32_t max_sq_num; - /* word 1 : Max submission queue depth */ uint32_t max_sq_depth; - /* word 2 : Max number of completion queues */ uint32_t max_cq_num; - /* word 3 : Max completion queue depth */ uint32_t max_cq_depth; - /* word 4 : Max number of LLQ submission queues */ uint32_t max_llq_num; - /* word 5 : Max submission queue depth of LLQ */ uint32_t max_llq_depth; - /* word 6 : Max header size */ uint32_t max_header_size; - /* word 7 : */ - /* Maximum Descriptors number, including meta descriptors, allowed - * for a single Tx packet + /* Maximum Descriptors number, including meta descriptor, allowed for + * a single Tx packet */ uint16_t max_packet_tx_descs; @@ -666,86 +510,69 @@ struct ena_admin_queue_feature_desc { uint16_t max_packet_rx_descs; }; -/* ENA MTU Set Feature descriptor. */ struct ena_admin_set_feature_mtu_desc { - /* word 0 : mtu payload size (exclude L2) */ + /* exclude L2 */ uint32_t mtu; }; -/* ENA host attributes Set Feature descriptor. */ struct ena_admin_set_feature_host_attr_desc { - /* words 0:1 : host OS info base address in OS memory. host info is - * 4KB of physically contiguous + /* host OS info base address in OS memory. host info is 4KB of + * physically contiguous */ struct ena_common_mem_addr os_info_ba; - /* words 2:3 : host debug area base address in OS memory. debug - * area must be physically contiguous + /* host debug area base address in OS memory. debug area must be + * physically contiguous */ struct ena_common_mem_addr debug_ba; - /* word 4 : debug area size */ + /* debug area size */ uint32_t debug_area_size; }; -/* ENA Interrupt Moderation Get Feature descriptor. */ struct ena_admin_feature_intr_moder_desc { - /* word 0 : */ /* interrupt delay granularity in usec */ uint16_t intr_delay_resolution; uint16_t reserved; }; -/* ENA Link Get Feature descriptor. 
*/ struct ena_admin_get_feature_link_desc { - /* word 0 : Link speed in Mb */ + /* Link speed in Mb */ uint32_t speed; - /* word 1 : supported speeds (bit field of enum ena_admin_link - * types) - */ + /* bit field of enum ena_admin_link_types */ uint32_t supported; - /* word 2 : */ - /* 0 : autoneg - auto negotiation + /* 0 : autoneg * 1 : duplex - Full Duplex * 31:2 : reserved2 */ uint32_t flags; }; -/* ENA AENQ Feature descriptor. */ struct ena_admin_feature_aenq_desc { - /* word 0 : bitmask for AENQ groups the device can report */ + /* bitmask for AENQ groups the device can report */ uint32_t supported_groups; - /* word 1 : bitmask for AENQ groups to report */ + /* bitmask for AENQ groups to report */ uint32_t enabled_groups; }; -/* ENA Stateless Offload Feature descriptor. */ struct ena_admin_feature_offload_desc { - /* word 0 : */ - /* Trasmit side stateless offload - * 0 : TX_L3_csum_ipv4 - IPv4 checksum - * 1 : TX_L4_ipv4_csum_part - TCP/UDP over IPv4 - * checksum, the checksum field should be initialized - * with pseudo header checksum - * 2 : TX_L4_ipv4_csum_full - TCP/UDP over IPv4 - * checksum - * 3 : TX_L4_ipv6_csum_part - TCP/UDP over IPv6 - * checksum, the checksum field should be initialized - * with pseudo header checksum - * 4 : TX_L4_ipv6_csum_full - TCP/UDP over IPv6 - * checksum - * 5 : tso_ipv4 - TCP/IPv4 Segmentation Offloading - * 6 : tso_ipv6 - TCP/IPv6 Segmentation Offloading - * 7 : tso_ecn - TCP Segmentation with ECN + /* 0 : TX_L3_csum_ipv4 + * 1 : TX_L4_ipv4_csum_part - The checksum field + * should be initialized with pseudo header checksum + * 2 : TX_L4_ipv4_csum_full + * 3 : TX_L4_ipv6_csum_part - The checksum field + * should be initialized with pseudo header checksum + * 4 : TX_L4_ipv6_csum_full + * 5 : tso_ipv4 + * 6 : tso_ipv6 + * 7 : tso_ecn */ uint32_t tx; - /* word 1 : */ /* Receive side supported stateless offload * 0 : RX_L3_csum_ipv4 - IPv4 checksum * 1 : RX_L4_ipv4_csum - TCP/UDP/IPv4 checksum @@ -754,118 +581,94 @@
struct ena_admin_feature_offload_desc { */ uint32_t rx_supported; - /* word 2 : */ - /* Receive side enabled stateless offload */ uint32_t rx_enabled; }; -/* hash functions */ enum ena_admin_hash_functions { - /* Toeplitz hash */ - ENA_ADMIN_TOEPLITZ = 1, + ENA_ADMIN_TOEPLITZ = 1, - /* CRC32 hash */ - ENA_ADMIN_CRC32 = 2, + ENA_ADMIN_CRC32 = 2, }; -/* ENA RSS flow hash control buffer structure */ struct ena_admin_feature_rss_flow_hash_control { - /* word 0 : number of valid keys */ uint32_t keys_num; - /* word 1 : */ uint32_t reserved; - /* Toeplitz keys */ uint32_t key[10]; }; -/* ENA RSS Flow Hash Function */ struct ena_admin_feature_rss_flow_hash_function { - /* word 0 : */ - /* supported hash functions - * 7:0 : funcs - supported hash functions (bitmask - * accroding to ena_admin_hash_functions) - */ + /* 7:0 : funcs - bitmask of ena_admin_hash_functions */ uint32_t supported_func; - /* word 1 : */ - /* selected hash func - * 7:0 : selected_func - selected hash function - * (bitmask accroding to ena_admin_hash_functions) + /* 7:0 : selected_func - bitmask of + * ena_admin_hash_functions */ uint32_t selected_func; - /* word 2 : initial value */ + /* initial value */ uint32_t init_val; }; /* RSS flow hash protocols */ enum ena_admin_flow_hash_proto { - /* tcp/ipv4 */ - ENA_ADMIN_RSS_TCP4 = 0, + ENA_ADMIN_RSS_TCP4 = 0, - /* udp/ipv4 */ - ENA_ADMIN_RSS_UDP4 = 1, + ENA_ADMIN_RSS_UDP4 = 1, - /* tcp/ipv6 */ - ENA_ADMIN_RSS_TCP6 = 2, + ENA_ADMIN_RSS_TCP6 = 2, - /* udp/ipv6 */ - ENA_ADMIN_RSS_UDP6 = 3, + ENA_ADMIN_RSS_UDP6 = 3, - /* ipv4 not tcp/udp */ - ENA_ADMIN_RSS_IP4 = 4, + ENA_ADMIN_RSS_IP4 = 4, - /* ipv6 not tcp/udp */ - ENA_ADMIN_RSS_IP6 = 5, + ENA_ADMIN_RSS_IP6 = 5, - /* fragmented ipv4 */ - ENA_ADMIN_RSS_IP4_FRAG = 6, + ENA_ADMIN_RSS_IP4_FRAG = 6, - /* not ipv4/6 */ - ENA_ADMIN_RSS_NOT_IP = 7, + ENA_ADMIN_RSS_NOT_IP = 7, - /* max number of protocols */ - ENA_ADMIN_RSS_PROTO_NUM = 16, + /* TCPv6 with extension header */ + ENA_ADMIN_RSS_TCP6_EX = 8, + + /* IPv6 
with extension header */ + ENA_ADMIN_RSS_IP6_EX = 9, + + ENA_ADMIN_RSS_PROTO_NUM = 16, }; /* RSS flow hash fields */ enum ena_admin_flow_hash_fields { /* Ethernet Dest Addr */ - ENA_ADMIN_RSS_L2_DA = 0, + ENA_ADMIN_RSS_L2_DA = BIT(0), /* Ethernet Src Addr */ - ENA_ADMIN_RSS_L2_SA = 1, + ENA_ADMIN_RSS_L2_SA = BIT(1), /* ipv4/6 Dest Addr */ - ENA_ADMIN_RSS_L3_DA = 2, + ENA_ADMIN_RSS_L3_DA = BIT(2), /* ipv4/6 Src Addr */ - ENA_ADMIN_RSS_L3_SA = 5, + ENA_ADMIN_RSS_L3_SA = BIT(3), /* tcp/udp Dest Port */ - ENA_ADMIN_RSS_L4_DP = 6, + ENA_ADMIN_RSS_L4_DP = BIT(4), /* tcp/udp Src Port */ - ENA_ADMIN_RSS_L4_SP = 7, + ENA_ADMIN_RSS_L4_SP = BIT(5), }; -/* hash input fields for flow protocol */ struct ena_admin_proto_input { - /* word 0 : */ /* flow hash fields (bitwise according to ena_admin_flow_hash_fields) */ uint16_t fields; uint16_t reserved2; }; -/* ENA RSS hash control buffer structure */ struct ena_admin_feature_rss_hash_control { - /* supported input fields */ struct ena_admin_proto_input supported_fields[ENA_ADMIN_RSS_PROTO_NUM]; - /* selected input fields */ struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM]; struct ena_admin_proto_input reserved2[ENA_ADMIN_RSS_PROTO_NUM]; @@ -873,11 +676,9 @@ struct ena_admin_feature_rss_hash_control { struct ena_admin_proto_input reserved3[ENA_ADMIN_RSS_PROTO_NUM]; }; -/* ENA RSS flow hash input */ struct ena_admin_feature_rss_flow_hash_input { - /* word 0 : */ /* supported hash input sorting - * 1 : L3_sort - support swap L3 addresses if DA + * 1 : L3_sort - support swap L3 addresses if DA is * smaller than SA * 2 : L4_sort - support swap L4 ports if DP smaller * SP @@ -893,46 +694,37 @@ struct ena_admin_feature_rss_flow_hash_input { uint16_t enabled_input_sort; }; -/* Operating system type */ enum ena_admin_os_type { - /* Linux OS */ - ENA_ADMIN_OS_LINUX = 1, + ENA_ADMIN_OS_LINUX = 1, - /* Windows OS */ - ENA_ADMIN_OS_WIN = 2, + ENA_ADMIN_OS_WIN = 2, - /* DPDK OS */ - ENA_ADMIN_OS_DPDK = 3, + 
ENA_ADMIN_OS_DPDK = 3, - /* FreeBSD OS */ - ENA_ADMIN_OS_FREEBSD = 4, + ENA_ADMIN_OS_FREEBSD = 4, - /* PXE OS */ - ENA_ADMIN_OS_IPXE = 5, + ENA_ADMIN_OS_IPXE = 5, }; -/* host info */ struct ena_admin_host_info { - /* word 0 : OS type defined in enum ena_os_type */ + /* defined in enum ena_admin_os_type */ uint32_t os_type; /* os distribution string format */ uint8_t os_dist_str[128]; - /* word 33 : OS distribution numeric format */ + /* OS distribution numeric format */ uint32_t os_dist; /* kernel version string format */ uint8_t kernel_ver_str[32]; - /* word 42 : Kernel version numeric format */ + /* Kernel version numeric format */ uint32_t kernel_ver; - /* word 43 : */ - /* driver version - * 7:0 : major - major - * 15:8 : minor - minor - * 23:16 : sub_minor - sub minor + /* 7:0 : major + * 15:8 : minor + * 23:16 : sub_minor */ uint32_t driver_version; @@ -940,220 +732,200 @@ struct ena_admin_host_info { uint32_t supported_network_features[4]; }; -/* ENA RSS indirection table entry */ struct ena_admin_rss_ind_table_entry { - /* word 0 : */ - /* cq identifier */ uint16_t cq_idx; uint16_t reserved; }; -/* ENA RSS indirection table */ struct ena_admin_feature_rss_ind_table { - /* word 0 : */ /* min supported table size (2^min_size) */ uint16_t min_size; /* max supported table size (2^max_size) */ uint16_t max_size; - /* word 1 : */ /* table size (2^size) */ uint16_t size; uint16_t reserved; - /* word 2 : index of the inline entry. 0xFFFFFFFF means invalid */ + /* index of the inline entry. 0xFFFFFFFF means invalid */ uint32_t inline_index; - /* words 3 : used for updating single entry, ignored when setting - * the entire table through the control buffer. + /* used for updating single entry, ignored when setting the entire + * table through the control buffer. 
*/ struct ena_admin_rss_ind_table_entry inline_entry; }; -/* ENA Get Feature command */ +/* When hint value is 0, driver should use its own predefined value */ +struct ena_admin_ena_hw_hints { + /* value in ms */ + uint16_t mmio_read_timeout; + + /* value in ms */ + uint16_t driver_watchdog_timeout; + + /* Per packet tx completion timeout. value in ms */ + uint16_t missing_tx_completion_timeout; + + uint16_t missed_tx_completion_count_threshold_to_reset; + + /* value in ms */ + uint16_t admin_completion_tx_timeout; + + uint16_t netdev_wd_timeout; + + uint16_t max_tx_sgl_size; + + uint16_t max_rx_sgl_size; + + uint16_t reserved[8]; +}; + struct ena_admin_get_feat_cmd { - /* words 0 : */ struct ena_admin_aq_common_desc aq_common_descriptor; - /* words 1:3 : points to control buffer (direct or indirect, - * chained if needed) - */ struct ena_admin_ctrl_buff_info control_buffer; - /* words 4 : */ struct ena_admin_get_set_feature_common_desc feat_common; - /* words 5:15 : */ - union { - /* raw words */ - uint32_t raw[11]; - } u; + uint32_t raw[11]; }; -/* ENA Get Feature command response */ struct ena_admin_get_feat_resp { - /* words 0:1 : */ struct ena_admin_acq_common_desc acq_common_desc; - /* words 2:15 : */ union { - /* raw words */ uint32_t raw[14]; - /* words 2:10 : Get Device Attributes */ struct ena_admin_device_attr_feature_desc dev_attr; - /* words 2:5 : Max queues num */ struct ena_admin_queue_feature_desc max_queue; - /* words 2:3 : AENQ configuration */ struct ena_admin_feature_aenq_desc aenq; - /* words 2:4 : Get Link configuration */ struct ena_admin_get_feature_link_desc link; - /* words 2:4 : offload configuration */ struct ena_admin_feature_offload_desc offload; - /* words 2:4 : rss flow hash function */ struct ena_admin_feature_rss_flow_hash_function flow_hash_func; - /* words 2 : rss flow hash input */ struct ena_admin_feature_rss_flow_hash_input flow_hash_input; - /* words 2:3 : rss indirection table */ struct ena_admin_feature_rss_ind_table
ind_table;
 
-	/* words 2 : interrupt moderation configuration */
 	struct ena_admin_feature_intr_moder_desc intr_moderation;
+
+	struct ena_admin_ena_hw_hints hw_hints;
 	} u;
 };
 
-/* ENA Set Feature command */
 struct ena_admin_set_feat_cmd {
-	/* words 0 : */
 	struct ena_admin_aq_common_desc aq_common_descriptor;
 
-	/* words 1:3 : points to control buffer (direct or indirect,
-	 * chained if needed)
-	 */
 	struct ena_admin_ctrl_buff_info control_buffer;
 
-	/* words 4 : */
 	struct ena_admin_get_set_feature_common_desc feat_common;
 
-	/* words 5:15 : */
 	union {
-		/* raw words */
 		uint32_t raw[11];
 
-		/* words 5 : mtu size */
+		/* mtu size */
 		struct ena_admin_set_feature_mtu_desc mtu;
 
-		/* words 5:7 : host attributes */
+		/* host attributes */
 		struct ena_admin_set_feature_host_attr_desc host_attr;
 
-		/* words 5:6 : AENQ configuration */
+		/* AENQ configuration */
 		struct ena_admin_feature_aenq_desc aenq;
 
-		/* words 5:7 : rss flow hash function */
+		/* rss flow hash function */
 		struct ena_admin_feature_rss_flow_hash_function flow_hash_func;
 
-		/* words 5 : rss flow hash input */
+		/* rss flow hash input */
 		struct ena_admin_feature_rss_flow_hash_input flow_hash_input;
 
-		/* words 5:6 : rss indirection table */
+		/* rss indirection table */
 		struct ena_admin_feature_rss_ind_table ind_table;
 	} u;
 };
 
-/* ENA Set Feature command response */
 struct ena_admin_set_feat_resp {
-	/* words 0:1 : */
 	struct ena_admin_acq_common_desc acq_common_desc;
 
-	/* words 2:15 : */
 	union {
-		/* raw words */
 		uint32_t raw[14];
 	} u;
 };
 
-/* ENA Asynchronous Event Notification Queue descriptor.
- */
 struct ena_admin_aenq_common_desc {
-	/* word 0 : */
 	uint16_t group;
 
 	uint16_t syndrom;
 
-	/* word 1 : */
 	/* 0 : phase */
 	uint8_t flags;
 
 	uint8_t reserved1[3];
 
-	/* word 2 : Timestamp LSB */
 	uint32_t timestamp_low;
 
-	/* word 3 : Timestamp MSB */
 	uint32_t timestamp_high;
 };
 
 /* asynchronous event notification groups */
 enum ena_admin_aenq_group {
-	/* Link State Change */
-	ENA_ADMIN_LINK_CHANGE = 0,
+	ENA_ADMIN_LINK_CHANGE = 0,
 
-	ENA_ADMIN_FATAL_ERROR = 1,
+	ENA_ADMIN_FATAL_ERROR = 1,
 
-	ENA_ADMIN_WARNING = 2,
+	ENA_ADMIN_WARNING = 2,
 
-	ENA_ADMIN_NOTIFICATION = 3,
+	ENA_ADMIN_NOTIFICATION = 3,
 
-	ENA_ADMIN_KEEP_ALIVE = 4,
+	ENA_ADMIN_KEEP_ALIVE = 4,
 
-	ENA_ADMIN_AENQ_GROUPS_NUM = 5,
+	ENA_ADMIN_AENQ_GROUPS_NUM = 5,
 };
 
-/* syndorm of AENQ notification group */
 enum ena_admin_aenq_notification_syndrom {
-	ENA_ADMIN_SUSPEND = 0,
+	ENA_ADMIN_SUSPEND = 0,
+
+	ENA_ADMIN_RESUME = 1,
 
-	ENA_ADMIN_RESUME = 1,
+	ENA_ADMIN_UPDATE_HINTS = 2,
 };
 
-/* ENA Asynchronous Event Notification generic descriptor. */
 struct ena_admin_aenq_entry {
-	/* words 0:3 : */
 	struct ena_admin_aenq_common_desc aenq_common_desc;
 
 	/* command specific inline data */
 	uint32_t inline_data_w4[12];
 };
 
-/* ENA Asynchronous Event Notification Queue Link Change descriptor.
- */
 struct ena_admin_aenq_link_change_desc {
-	/* words 0:3 : */
 	struct ena_admin_aenq_common_desc aenq_common_desc;
 
-	/* word 4 : */
 	/* 0 : link_status */
 	uint32_t flags;
 };
 
-/* ENA MMIO Readless response interface */
+struct ena_admin_aenq_keep_alive_desc {
+	struct ena_admin_aenq_common_desc aenq_common_desc;
+
+	uint32_t rx_drops_low;
+
+	uint32_t rx_drops_high;
+};
+
 struct ena_admin_ena_mmio_req_read_less_resp {
-	/* word 0 : */
-	/* request id */
 	uint16_t req_id;
 
-	/* register offset */
 	uint16_t reg_off;
 
-	/* word 1 : value is valid when poll is cleared */
+	/* value is valid when poll is cleared */
 	uint32_t reg_val;
 };
 
@@ -1220,8 +992,7 @@ struct ena_admin_ena_mmio_req_read_less_resp {
 /* feature_rss_flow_hash_function */
 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK GENMASK(7, 0)
-#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK \
-	GENMASK(7, 0)
+#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK GENMASK(7, 0)
 
 /* feature_rss_flow_hash_input */
 #define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT 1
 
@@ -1247,653 +1018,392 @@ struct ena_admin_ena_mmio_req_read_less_resp {
 #define ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK BIT(0)
 
 #if !defined(ENA_DEFS_LINUX_MAINLINE)
-static inline uint16_t
-get_ena_admin_aq_common_desc_command_id(
-	const struct ena_admin_aq_common_desc *p)
+static inline uint16_t get_ena_admin_aq_common_desc_command_id(const struct ena_admin_aq_common_desc *p)
 {
 	return p->command_id & ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK;
 }
 
-static inline void
-set_ena_admin_aq_common_desc_command_id(struct ena_admin_aq_common_desc *p,
-	uint16_t val)
+static inline void set_ena_admin_aq_common_desc_command_id(struct ena_admin_aq_common_desc *p, uint16_t val)
 {
 	p->command_id |= val & ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_common_desc_phase(const struct ena_admin_aq_common_desc *p)
+static inline uint8_t get_ena_admin_aq_common_desc_phase(const struct ena_admin_aq_common_desc *p)
 {
 	return p->flags & ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline void
-set_ena_admin_aq_common_desc_phase(struct ena_admin_aq_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_common_desc_phase(struct ena_admin_aq_common_desc *p, uint8_t val)
 {
 	p->flags |= val & ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_common_desc_ctrl_data(
-	const struct ena_admin_aq_common_desc *p)
+static inline uint8_t get_ena_admin_aq_common_desc_ctrl_data(const struct ena_admin_aq_common_desc *p)
 {
-	return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK) >>
-		ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT;
+	return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK) >> ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT;
 }
 
-static inline void
-set_ena_admin_aq_common_desc_ctrl_data(struct ena_admin_aq_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_common_desc_ctrl_data(struct ena_admin_aq_common_desc *p, uint8_t val)
 {
-	p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT)
-		& ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK;
+	p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT) & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_common_desc_ctrl_data_indirect(
-	const struct ena_admin_aq_common_desc *p)
+static inline uint8_t get_ena_admin_aq_common_desc_ctrl_data_indirect(const struct ena_admin_aq_common_desc *p)
 {
-	return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK)
-		>> ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT;
+	return (p->flags & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK) >> ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT;
 }
 
-static inline void
-set_ena_admin_aq_common_desc_ctrl_data_indirect(
-	struct ena_admin_aq_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_common_desc_ctrl_data_indirect(struct ena_admin_aq_common_desc *p, uint8_t val)
 {
-	p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT)
-		& ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK;
+	p->flags |= (val << ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT) & ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_sq_sq_direction(const struct ena_admin_sq *p)
+static inline uint8_t get_ena_admin_sq_sq_direction(const struct ena_admin_sq *p)
 {
-	return (p->sq_identity & ENA_ADMIN_SQ_SQ_DIRECTION_MASK)
-		>> ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT;
+	return (p->sq_identity & ENA_ADMIN_SQ_SQ_DIRECTION_MASK) >> ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT;
 }
 
-static inline void
-set_ena_admin_sq_sq_direction(struct ena_admin_sq *p, uint8_t val)
+static inline void set_ena_admin_sq_sq_direction(struct ena_admin_sq *p, uint8_t val)
 {
-	p->sq_identity |= (val << ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT) &
-		ENA_ADMIN_SQ_SQ_DIRECTION_MASK;
+	p->sq_identity |= (val << ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT) & ENA_ADMIN_SQ_SQ_DIRECTION_MASK;
 }
 
-static inline uint16_t
-get_ena_admin_acq_common_desc_command_id(
-	const struct ena_admin_acq_common_desc *p)
+static inline uint16_t get_ena_admin_acq_common_desc_command_id(const struct ena_admin_acq_common_desc *p)
 {
 	return p->command & ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK;
 }
 
-static inline void
-set_ena_admin_acq_common_desc_command_id(struct ena_admin_acq_common_desc *p,
-	uint16_t val)
+static inline void set_ena_admin_acq_common_desc_command_id(struct ena_admin_acq_common_desc *p, uint16_t val)
 {
 	p->command |= val & ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_acq_common_desc_phase(const struct ena_admin_acq_common_desc *p)
+static inline uint8_t get_ena_admin_acq_common_desc_phase(const struct ena_admin_acq_common_desc *p)
 {
 	return p->flags & ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline void
-set_ena_admin_acq_common_desc_phase(struct ena_admin_acq_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_acq_common_desc_phase(struct ena_admin_acq_common_desc *p, uint8_t val)
 {
 	p->flags |= val & ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_sq_cmd_sq_direction(
-	const struct ena_admin_aq_create_sq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_sq_cmd_sq_direction(const struct ena_admin_aq_create_sq_cmd *p)
 {
-	return (p->sq_identity & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK)
-		>> ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT;
+	return (p->sq_identity & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK) >> ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT;
 }
 
-static inline void
-set_ena_admin_aq_create_sq_cmd_sq_direction(
-	struct ena_admin_aq_create_sq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_sq_cmd_sq_direction(struct ena_admin_aq_create_sq_cmd *p, uint8_t val)
 {
-	p->sq_identity |= (val <<
-		ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT)
-		& ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK;
+	p->sq_identity |= (val << ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_sq_cmd_placement_policy(
-	const struct ena_admin_aq_create_sq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_sq_cmd_placement_policy(const struct ena_admin_aq_create_sq_cmd *p)
 {
 	return p->sq_caps_2 & ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK;
 }
 
-static inline void
-set_ena_admin_aq_create_sq_cmd_placement_policy(
-	struct ena_admin_aq_create_sq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_sq_cmd_placement_policy(struct ena_admin_aq_create_sq_cmd *p, uint8_t val)
 {
 	p->sq_caps_2 |= val & ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_sq_cmd_completion_policy(
-	const struct ena_admin_aq_create_sq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_sq_cmd_completion_policy(const struct ena_admin_aq_create_sq_cmd *p)
 {
-	return (p->sq_caps_2
-		& ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK)
-		>> ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT;
+	return (p->sq_caps_2 & ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK) >> ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT;
 }
 
-static inline void
-set_ena_admin_aq_create_sq_cmd_completion_policy(
-	struct ena_admin_aq_create_sq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_sq_cmd_completion_policy(struct ena_admin_aq_create_sq_cmd *p, uint8_t val)
 {
-	p->sq_caps_2 |=
-		(val << ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT)
-		& ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK;
+	p->sq_caps_2 |= (val << ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT) & ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_sq_cmd_is_physically_contiguous(
-	const struct ena_admin_aq_create_sq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_sq_cmd_is_physically_contiguous(const struct ena_admin_aq_create_sq_cmd *p)
 {
-	return p->sq_caps_3 &
-		ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK;
+	return p->sq_caps_3 & ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK;
 }
 
-static inline void
-set_ena_admin_aq_create_sq_cmd_is_physically_contiguous(
-	struct ena_admin_aq_create_sq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_sq_cmd_is_physically_contiguous(struct ena_admin_aq_create_sq_cmd *p, uint8_t val)
 {
-	p->sq_caps_3 |= val &
-		ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK;
+	p->sq_caps_3 |= val & ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(
-	const struct ena_admin_aq_create_cq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(const struct ena_admin_aq_create_cq_cmd *p)
 {
-	return (p->cq_caps_1 &
-		ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK)
-		>> ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT;
+	return (p->cq_caps_1 & ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK) >> ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT;
 }
 
-static inline void
-set_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(
-	struct ena_admin_aq_create_cq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_cq_cmd_interrupt_mode_enabled(struct ena_admin_aq_create_cq_cmd *p, uint8_t val)
 {
-	p->cq_caps_1 |=
-		(val << ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT)
-		& ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK;
+	p->cq_caps_1 |= (val << ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT) & ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aq_create_cq_cmd_cq_entry_size_words(
-	const struct ena_admin_aq_create_cq_cmd *p)
+static inline uint8_t get_ena_admin_aq_create_cq_cmd_cq_entry_size_words(const struct ena_admin_aq_create_cq_cmd *p)
 {
-	return p->cq_caps_2
-		& ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK;
+	return p->cq_caps_2 & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK;
 }
 
-static inline void
-set_ena_admin_aq_create_cq_cmd_cq_entry_size_words(
-	struct ena_admin_aq_create_cq_cmd *p,
-	uint8_t val)
+static inline void set_ena_admin_aq_create_cq_cmd_cq_entry_size_words(struct ena_admin_aq_create_cq_cmd *p, uint8_t val)
 {
-	p->cq_caps_2 |=
-		val & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK;
+	p->cq_caps_2 |= val & ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_get_set_feature_common_desc_select(
-	const struct ena_admin_get_set_feature_common_desc *p)
+static inline uint8_t get_ena_admin_get_set_feature_common_desc_select(const struct ena_admin_get_set_feature_common_desc *p)
 {
 	return p->flags & ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK;
 }
 
-static inline void
-set_ena_admin_get_set_feature_common_desc_select(
-	struct ena_admin_get_set_feature_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_get_set_feature_common_desc_select(struct ena_admin_get_set_feature_common_desc *p, uint8_t val)
 {
 	p->flags |= val & ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_get_feature_link_desc_autoneg(
-	const struct ena_admin_get_feature_link_desc *p)
+static inline uint32_t get_ena_admin_get_feature_link_desc_autoneg(const struct ena_admin_get_feature_link_desc *p)
 {
 	return p->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK;
 }
 
-static inline void
-set_ena_admin_get_feature_link_desc_autoneg(
-	struct ena_admin_get_feature_link_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_get_feature_link_desc_autoneg(struct ena_admin_get_feature_link_desc *p, uint32_t val)
 {
 	p->flags |= val & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_get_feature_link_desc_duplex(
-	const struct ena_admin_get_feature_link_desc *p)
+static inline uint32_t get_ena_admin_get_feature_link_desc_duplex(const struct ena_admin_get_feature_link_desc *p)
 {
-	return (p->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK)
-		>> ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT;
+	return (p->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK) >> ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT;
 }
 
-static inline void
-set_ena_admin_get_feature_link_desc_duplex(
-	struct ena_admin_get_feature_link_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_get_feature_link_desc_duplex(struct ena_admin_get_feature_link_desc *p, uint32_t val)
 {
-	p->flags |= (val << ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT)
-		& ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK;
+	p->flags |= (val << ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT) & ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(const struct ena_admin_feature_offload_desc *p)
 {
 	return p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_TX_L3_csum_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
 	p->tx |= val & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_part(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val <<
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv4_csum_full(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val <<
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_part(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val <<
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_TX_L4_ipv6_csum_full(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val <<
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_tso_ipv4(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_tso_ipv4(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_tso_ipv4(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_tso_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_tso_ipv6(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_tso_ipv6(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_tso_ipv6(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_tso_ipv6(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_tso_ecn(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_tso_ecn(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT;
+	return (p->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_tso_ecn(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_tso_ecn(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK;
+	p->tx |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(const struct ena_admin_feature_offload_desc *p)
 {
-	return p->rx_supported &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK;
+	return p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_RX_L3_csum_ipv4(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->rx_supported |=
-		val & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK;
+	p->rx_supported |= val & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->rx_supported &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT;
+	return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_RX_L4_ipv4_csum(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->rx_supported |=
-		(val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK;
+	p->rx_supported |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->rx_supported &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT;
+	return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_RX_L4_ipv6_csum(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->rx_supported |=
-		(val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK;
+	p->rx_supported |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_offload_desc_RX_hash(
-	const struct ena_admin_feature_offload_desc *p)
+static inline uint32_t get_ena_admin_feature_offload_desc_RX_hash(const struct ena_admin_feature_offload_desc *p)
 {
-	return (p->rx_supported &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK)
-		>> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT;
+	return (p->rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) >> ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_offload_desc_RX_hash(
-	struct ena_admin_feature_offload_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_offload_desc_RX_hash(struct ena_admin_feature_offload_desc *p, uint32_t val)
 {
-	p->rx_supported |=
-		(val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT)
-		& ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK;
+	p->rx_supported |= (val << ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT) & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_rss_flow_hash_function_funcs(
-	const struct ena_admin_feature_rss_flow_hash_function *p)
+static inline uint32_t get_ena_admin_feature_rss_flow_hash_function_funcs(const struct ena_admin_feature_rss_flow_hash_function *p)
 {
-	return p->supported_func &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK;
+	return p->supported_func & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_function_funcs(
-	struct ena_admin_feature_rss_flow_hash_function *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_function_funcs(struct ena_admin_feature_rss_flow_hash_function *p, uint32_t val)
 {
-	p->supported_func |=
-		val & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK;
+	p->supported_func |= val & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_feature_rss_flow_hash_function_selected_func(
-	const struct ena_admin_feature_rss_flow_hash_function *p)
+static inline uint32_t get_ena_admin_feature_rss_flow_hash_function_selected_func(const struct ena_admin_feature_rss_flow_hash_function *p)
 {
-	return p->selected_func &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK;
+	return p->selected_func & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_function_selected_func(
-	struct ena_admin_feature_rss_flow_hash_function *p,
-	uint32_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_function_selected_func(struct ena_admin_feature_rss_flow_hash_function *p, uint32_t val)
 {
-	p->selected_func |=
-		val &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK;
+	p->selected_func |= val & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK;
 }
 
-static inline uint16_t
-get_ena_admin_feature_rss_flow_hash_input_L3_sort(
-	const struct ena_admin_feature_rss_flow_hash_input *p)
+static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_L3_sort(const struct ena_admin_feature_rss_flow_hash_input *p)
 {
-	return (p->supported_input_sort &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK)
-		>> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT;
+	return (p->supported_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_input_L3_sort(
-	struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_input_L3_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val)
 {
-	p->supported_input_sort |=
-		(val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT)
-		& ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK;
+	p->supported_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK;
 }
 
-static inline uint16_t
-get_ena_admin_feature_rss_flow_hash_input_L4_sort(
-	const struct ena_admin_feature_rss_flow_hash_input *p)
+static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_L4_sort(const struct ena_admin_feature_rss_flow_hash_input *p)
 {
-	return (p->supported_input_sort &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK)
-		>> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT;
+	return (p->supported_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_input_L4_sort(
-	struct ena_admin_feature_rss_flow_hash_input *p,
-	uint16_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_input_L4_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val)
 {
-	p->supported_input_sort |=
-		(val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT)
-		& ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK;
+	p->supported_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK;
 }
 
-static inline uint16_t
-get_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(
-	const struct ena_admin_feature_rss_flow_hash_input *p)
+static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(const struct ena_admin_feature_rss_flow_hash_input *p)
 {
-	return (p->enabled_input_sort &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK)
-		>> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT;
+	return (p->enabled_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(
-	struct ena_admin_feature_rss_flow_hash_input *p,
-	uint16_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_input_enable_L3_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val)
 {
-	p->enabled_input_sort |=
-		(val <<
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT)
-		& ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK;
+	p->enabled_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK;
 }
 
-static inline uint16_t
-get_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(
-	const struct ena_admin_feature_rss_flow_hash_input *p)
+static inline uint16_t get_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(const struct ena_admin_feature_rss_flow_hash_input *p)
 {
-	return (p->enabled_input_sort &
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK)
-		>> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT;
+	return (p->enabled_input_sort & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK) >> ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT;
 }
 
-static inline void
-set_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(
-	struct ena_admin_feature_rss_flow_hash_input *p,
-	uint16_t val)
+static inline void set_ena_admin_feature_rss_flow_hash_input_enable_L4_sort(struct ena_admin_feature_rss_flow_hash_input *p, uint16_t val)
 {
-	p->enabled_input_sort |=
-		(val <<
-		ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT)
-		& ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK;
+	p->enabled_input_sort |= (val << ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT) & ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_host_info_major(const struct ena_admin_host_info *p)
+static inline uint32_t get_ena_admin_host_info_major(const struct ena_admin_host_info *p)
 {
 	return p->driver_version & ENA_ADMIN_HOST_INFO_MAJOR_MASK;
 }
 
-static inline void
-set_ena_admin_host_info_major(struct ena_admin_host_info *p, uint32_t val)
+static inline void set_ena_admin_host_info_major(struct ena_admin_host_info *p, uint32_t val)
 {
 	p->driver_version |= val & ENA_ADMIN_HOST_INFO_MAJOR_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_host_info_minor(const struct ena_admin_host_info *p)
+static inline uint32_t get_ena_admin_host_info_minor(const struct ena_admin_host_info *p)
 {
-	return (p->driver_version & ENA_ADMIN_HOST_INFO_MINOR_MASK)
-		>> ENA_ADMIN_HOST_INFO_MINOR_SHIFT;
+	return (p->driver_version & ENA_ADMIN_HOST_INFO_MINOR_MASK) >> ENA_ADMIN_HOST_INFO_MINOR_SHIFT;
 }
 
-static inline void
-set_ena_admin_host_info_minor(struct ena_admin_host_info *p, uint32_t val)
+static inline void set_ena_admin_host_info_minor(struct ena_admin_host_info *p, uint32_t val)
 {
-	p->driver_version |= (val << ENA_ADMIN_HOST_INFO_MINOR_SHIFT)
-		& ENA_ADMIN_HOST_INFO_MINOR_MASK;
+	p->driver_version |= (val << ENA_ADMIN_HOST_INFO_MINOR_SHIFT) & ENA_ADMIN_HOST_INFO_MINOR_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_host_info_sub_minor(const struct ena_admin_host_info *p)
+static inline uint32_t get_ena_admin_host_info_sub_minor(const struct ena_admin_host_info *p)
 {
-	return (p->driver_version & ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK)
-		>> ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT;
+	return (p->driver_version & ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK) >> ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT;
 }
 
-static inline void
-set_ena_admin_host_info_sub_minor(struct ena_admin_host_info *p, uint32_t val)
+static inline void set_ena_admin_host_info_sub_minor(struct ena_admin_host_info *p, uint32_t val)
 {
-	p->driver_version |= (val << ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT)
-		& ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK;
+	p->driver_version |= (val << ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT) & ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK;
 }
 
-static inline uint8_t
-get_ena_admin_aenq_common_desc_phase(
-	const struct ena_admin_aenq_common_desc *p)
+static inline uint8_t get_ena_admin_aenq_common_desc_phase(const struct ena_admin_aenq_common_desc *p)
 {
 	return p->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline void
-set_ena_admin_aenq_common_desc_phase(
-	struct ena_admin_aenq_common_desc *p,
-	uint8_t val)
+static inline void set_ena_admin_aenq_common_desc_phase(struct ena_admin_aenq_common_desc *p, uint8_t val)
 {
 	p->flags |= val & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK;
 }
 
-static inline uint32_t
-get_ena_admin_aenq_link_change_desc_link_status(
-	const struct ena_admin_aenq_link_change_desc *p)
+static inline uint32_t get_ena_admin_aenq_link_change_desc_link_status(const struct ena_admin_aenq_link_change_desc *p)
 {
 	return p->flags & ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK;
 }
 
-static inline void
-set_ena_admin_aenq_link_change_desc_link_status(
-	struct ena_admin_aenq_link_change_desc *p,
-	uint32_t val)
+static inline void set_ena_admin_aenq_link_change_desc_link_status(struct ena_admin_aenq_link_change_desc *p, uint32_t val)
 {
 	p->flags |= val & ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK;
 }
 
diff --git a/drivers/net/ena/base/ena_defs/ena_common_defs.h b/drivers/net/ena/base/ena_defs/ena_common_defs.h
index 95e0f3897..072e6c1f1 100644
--- a/drivers/net/ena/base/ena_defs/ena_common_defs.h
+++ b/drivers/net/ena/base/ena_defs/ena_common_defs.h
@@ -34,17 +34,13 @@
 #ifndef _ENA_COMMON_H_
 #define _ENA_COMMON_H_
 
-/* spec version */
-#define ENA_COMMON_SPEC_VERSION_MAJOR 0 /* spec version major */
-#define ENA_COMMON_SPEC_VERSION_MINOR 10 /* spec version minor */
+#define ENA_COMMON_SPEC_VERSION_MAJOR 0 /* */
+#define ENA_COMMON_SPEC_VERSION_MINOR 10 /* */
 
 /* ENA operates with 48-bit memory addresses. ena_mem_addr_t */
 struct ena_common_mem_addr {
-	/* word 0 : low 32 bit of the memory address */
 	uint32_t mem_addr_low;
 
-	/* word 1 : */
-	/* high 16 bits of the memory address */
 	uint16_t mem_addr_high;
 
 	/* MBZ */
diff --git a/drivers/net/ena/base/ena_defs/ena_eth_io_defs.h b/drivers/net/ena/base/ena_defs/ena_eth_io_defs.h
index 6bc3d6a7c..4cf0b205b 100644
--- a/drivers/net/ena/base/ena_defs/ena_eth_io_defs.h
+++ b/drivers/net/ena/base/ena_defs/ena_eth_io_defs.h
@@ -34,35 +34,30 @@
 #ifndef _ENA_ETH_IO_H_
 #define _ENA_ETH_IO_H_
 
-/* Layer 3 protocol index */
 enum ena_eth_io_l3_proto_index {
-	ENA_ETH_IO_L3_PROTO_UNKNOWN = 0,
+	ENA_ETH_IO_L3_PROTO_UNKNOWN = 0,
 
-	ENA_ETH_IO_L3_PROTO_IPV4 = 8,
+	ENA_ETH_IO_L3_PROTO_IPV4 = 8,
 
-	ENA_ETH_IO_L3_PROTO_IPV6 = 11,
+	ENA_ETH_IO_L3_PROTO_IPV6 = 11,
 
-	ENA_ETH_IO_L3_PROTO_FCOE = 21,
+	ENA_ETH_IO_L3_PROTO_FCOE = 21,
 
-	ENA_ETH_IO_L3_PROTO_ROCE = 22,
+	ENA_ETH_IO_L3_PROTO_ROCE = 22,
 };
 
-/* Layer 4 protocol index */
 enum ena_eth_io_l4_proto_index {
-	ENA_ETH_IO_L4_PROTO_UNKNOWN = 0,
+	ENA_ETH_IO_L4_PROTO_UNKNOWN = 0,
 
-	ENA_ETH_IO_L4_PROTO_TCP = 12,
+	ENA_ETH_IO_L4_PROTO_TCP = 12,
 
-	ENA_ETH_IO_L4_PROTO_UDP = 13,
+	ENA_ETH_IO_L4_PROTO_UDP = 13,
 
-	ENA_ETH_IO_L4_PROTO_ROUTEABLE_ROCE = 23,
+	ENA_ETH_IO_L4_PROTO_ROUTEABLE_ROCE = 23,
 };
 
-/* ENA IO Queue Tx descriptor */
 struct ena_eth_io_tx_desc {
-	/* word 0 : */
-	/* length, request id and control flags
-	 * 15:0 : length - Buffer length in bytes, must
+	/* 15:0 : length - Buffer length in bytes, must
 	 * include any packet trailers that the ENA supposed
 	 * to update like End-to-End CRC, Authentication GMAC
 	 * etc.
This length must not include the @@ -85,9 +80,7 @@ struct ena_eth_io_tx_desc { */ uint32_t len_ctrl; - /* word 1 : */ - /* ethernet control - * 3:0 : l3_proto_idx - L3 protocol. This field + /* 3:0 : l3_proto_idx - L3 protocol. This field * required when l3_csum_en,l3_csum or tso_en are set. * 4 : DF - IPv4 DF, must be 0 if packet is IPv4 and * DF flags of the IPv4 header is 0. Otherwise must @@ -119,10 +112,8 @@ struct ena_eth_io_tx_desc { */ uint32_t meta_ctrl; - /* word 2 : Buffer address bits[31:0] */ uint32_t buff_addr_lo; - /* word 3 : */ /* address high and header size * 15:0 : addr_hi - Buffer Pointer[47:32] * 23:16 : reserved16_w2 @@ -141,20 +132,16 @@ struct ena_eth_io_tx_desc { uint32_t buff_addr_hi_hdr_sz; }; -/* ENA IO Queue Tx Meta descriptor */ struct ena_eth_io_tx_meta_desc { - /* word 0 : */ - /* length, request id and control flags - * 9:0 : req_id_lo - Request ID[9:0] + /* 9:0 : req_id_lo - Request ID[9:0] * 11:10 : reserved10 - MBZ * 12 : reserved12 - MBZ * 13 : reserved13 - MBZ * 14 : ext_valid - if set, offset fields in Word2 - * are valid Also MSS High in Word 0 and Outer L3 - * Offset High in WORD 0 and bits [31:24] in Word 3 - * 15 : word3_valid - If set Crypto Info[23:0] of - * Word 3 is valid - * 19:16 : mss_hi_ptp + * are valid Also MSS High in Word 0 and bits [31:24] + * in Word 3 + * 15 : reserved15 + * 19:16 : mss_hi * 20 : eth_meta_type - 0: Tx Metadata Descriptor, 1: * Extended Metadata Descriptor * 21 : meta_store - Store extended metadata in queue @@ -175,19 +162,13 @@ struct ena_eth_io_tx_meta_desc { */ uint32_t len_ctrl; - /* word 1 : */ - /* word 1 - * 5:0 : req_id_hi + /* 5:0 : req_id_hi * 31:6 : reserved6 - MBZ */ uint32_t word1; - /* word 2 : */ - /* word 2 - * 7:0 : l3_hdr_len - the header length L3 IP header. - * 15:8 : l3_hdr_off - the offset of the first byte - * in the L3 header from the beginning of the to-be - * transmitted packet. 
+ /* 7:0 : l3_hdr_len + * 15:8 : l3_hdr_off * 21:16 : l4_hdr_len_in_words - counts the L4 header * length in words. there is an explicit assumption * that L4 header appears right after L3 header and @@ -196,13 +177,10 @@ struct ena_eth_io_tx_meta_desc { */ uint32_t word2; - /* word 3 : */ uint32_t reserved; }; -/* ENA IO Queue Tx completions descriptor */ struct ena_eth_io_tx_cdesc { - /* word 0 : */ /* Request ID[15:0] */ uint16_t req_id; @@ -214,24 +192,19 @@ struct ena_eth_io_tx_cdesc { */ uint8_t flags; - /* word 1 : */ uint16_t sub_qid; - /* indicates location of submission queue head */ uint16_t sq_head_idx; }; -/* ENA IO Queue Rx descriptor */ struct ena_eth_io_rx_desc { - /* word 0 : */ /* In bytes. 0 means 64KB */ uint16_t length; /* MBZ */ uint8_t reserved2; - /* control flags - * 0 : phase + /* 0 : phase * 1 : reserved1 - MBZ * 2 : first - Indicates first descriptor in * transaction @@ -242,32 +215,27 @@ struct ena_eth_io_rx_desc { */ uint8_t ctrl; - /* word 1 : */ uint16_t req_id; /* MBZ */ uint16_t reserved6; - /* word 2 : Buffer address bits[31:0] */ uint32_t buff_addr_lo; - /* word 3 : */ - /* Buffer Address bits[47:16] */ uint16_t buff_addr_hi; /* MBZ */ uint16_t reserved16_w3; }; -/* ENA IO Queue Rx Completion Base Descriptor (4-word format). Note: all - * ethernet parsing information are valid only when last=1 +/* 4-word format Note: all ethernet parsing information are valid only when + * last=1 */ struct ena_eth_io_rx_cdesc_base { - /* word 0 : */ - /* 4:0 : l3_proto_idx - L3 protocol index - * 6:5 : src_vlan_cnt - Source VLAN count + /* 4:0 : l3_proto_idx + * 6:5 : src_vlan_cnt * 7 : reserved7 - MBZ - * 12:8 : l4_proto_idx - L4 protocol index + * 12:8 : l4_proto_idx * 13 : l3_csum_err - when set, either the L3 * checksum error detected, or, the controller didn't * validate the checksum. 
This bit is valid only when @@ -292,56 +260,43 @@ struct ena_eth_io_rx_cdesc_base { */ uint32_t status; - /* word 1 : */ uint16_t length; uint16_t req_id; - /* word 2 : 32-bit hash result */ + /* 32-bit hash result */ uint32_t hash; - /* word 3 : */ - /* submission queue number */ uint16_t sub_qid; uint16_t reserved; }; -/* ENA IO Queue Rx Completion Descriptor (8-word format) */ +/* 8-word format */ struct ena_eth_io_rx_cdesc_ext { - /* words 0:3 : Rx Completion Extended */ struct ena_eth_io_rx_cdesc_base base; - /* word 4 : Completed Buffer address bits[31:0] */ uint32_t buff_addr_lo; - /* word 5 : */ - /* the buffer address used bits[47:32] */ uint16_t buff_addr_hi; uint16_t reserved16; - /* word 6 : Reserved */ uint32_t reserved_w6; - /* word 7 : Reserved */ uint32_t reserved_w7; }; -/* ENA Interrupt Unmask Register */ struct ena_eth_io_intr_reg { - /* word 0 : */ - /* 14:0 : rx_intr_delay - rx interrupt delay value - * 29:15 : tx_intr_delay - tx interrupt delay value - * 30 : intr_unmask - if set, unmasks interrupt + /* 14:0 : rx_intr_delay + * 29:15 : tx_intr_delay + * 30 : intr_unmask * 31 : reserved */ uint32_t intr_control; }; -/* ENA NUMA Node configuration register */ struct ena_eth_io_numa_node_cfg_reg { - /* word 0 : */ /* 7:0 : numa * 30:8 : reserved * 31 : enabled @@ -388,10 +343,8 @@ struct ena_eth_io_numa_node_cfg_reg { #define ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK GENMASK(9, 0) #define ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT 14 #define ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK BIT(14) -#define ENA_ETH_IO_TX_META_DESC_WORD3_VALID_SHIFT 15 -#define ENA_ETH_IO_TX_META_DESC_WORD3_VALID_MASK BIT(15) -#define ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_SHIFT 16 -#define ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_MASK GENMASK(19, 16) +#define ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT 16 +#define ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK GENMASK(19, 16) #define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT 20 #define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK BIT(20) #define 
ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT 21 @@ -463,803 +416,544 @@ struct ena_eth_io_numa_node_cfg_reg { #define ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK BIT(31) #if !defined(ENA_DEFS_LINUX_MAINLINE) -static inline uint32_t get_ena_eth_io_tx_desc_length( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_length(const struct ena_eth_io_tx_desc *p) { return p->len_ctrl & ENA_ETH_IO_TX_DESC_LENGTH_MASK; } -static inline void set_ena_eth_io_tx_desc_length( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_length(struct ena_eth_io_tx_desc *p, uint32_t val) { p->len_ctrl |= val & ENA_ETH_IO_TX_DESC_LENGTH_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_req_id_hi( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_req_id_hi(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK) - >> ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK) >> ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT; } -static inline void set_ena_eth_io_tx_desc_req_id_hi( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_req_id_hi(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) - & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_meta_desc( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_meta_desc(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_DESC_META_DESC_MASK) - >> ENA_ETH_IO_TX_DESC_META_DESC_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_META_DESC_MASK) >> ENA_ETH_IO_TX_DESC_META_DESC_SHIFT; } -static inline void set_ena_eth_io_tx_desc_meta_desc( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void 
set_ena_eth_io_tx_desc_meta_desc(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_META_DESC_SHIFT) - & ENA_ETH_IO_TX_DESC_META_DESC_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_META_DESC_SHIFT) & ENA_ETH_IO_TX_DESC_META_DESC_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_phase( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_phase(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_DESC_PHASE_MASK) - >> ENA_ETH_IO_TX_DESC_PHASE_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_PHASE_MASK) >> ENA_ETH_IO_TX_DESC_PHASE_SHIFT; } -static inline void set_ena_eth_io_tx_desc_phase( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_phase(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) - & ENA_ETH_IO_TX_DESC_PHASE_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_DESC_PHASE_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_first( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_first(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_DESC_FIRST_MASK) - >> ENA_ETH_IO_TX_DESC_FIRST_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_FIRST_MASK) >> ENA_ETH_IO_TX_DESC_FIRST_SHIFT; } -static inline void set_ena_eth_io_tx_desc_first( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_first(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_FIRST_SHIFT) - & ENA_ETH_IO_TX_DESC_FIRST_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_FIRST_SHIFT) & ENA_ETH_IO_TX_DESC_FIRST_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_last( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_last(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & 
ENA_ETH_IO_TX_DESC_LAST_MASK) - >> ENA_ETH_IO_TX_DESC_LAST_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_LAST_MASK) >> ENA_ETH_IO_TX_DESC_LAST_SHIFT; } -static inline void set_ena_eth_io_tx_desc_last( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_last(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_LAST_SHIFT) - & ENA_ETH_IO_TX_DESC_LAST_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_LAST_SHIFT) & ENA_ETH_IO_TX_DESC_LAST_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_comp_req( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_comp_req(const struct ena_eth_io_tx_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK) - >> ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT; } -static inline void set_ena_eth_io_tx_desc_comp_req( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_comp_req(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->len_ctrl |= - (val << ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT) - & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_TX_DESC_COMP_REQ_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_l3_proto_idx( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_l3_proto_idx(const struct ena_eth_io_tx_desc *p) { return p->meta_ctrl & ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK; } -static inline void set_ena_eth_io_tx_desc_l3_proto_idx( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_l3_proto_idx(struct ena_eth_io_tx_desc *p, uint32_t val) { p->meta_ctrl |= val & ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_DF( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_DF(const struct 
ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_DF_MASK) - >> ENA_ETH_IO_TX_DESC_DF_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_DF_MASK) >> ENA_ETH_IO_TX_DESC_DF_SHIFT; } -static inline void set_ena_eth_io_tx_desc_DF( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_DF(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_DF_SHIFT) - & ENA_ETH_IO_TX_DESC_DF_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_DF_SHIFT) & ENA_ETH_IO_TX_DESC_DF_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_tso_en( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_tso_en(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_TSO_EN_MASK) - >> ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_TSO_EN_MASK) >> ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT; } -static inline void set_ena_eth_io_tx_desc_tso_en( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_tso_en(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) - & ENA_ETH_IO_TX_DESC_TSO_EN_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) & ENA_ETH_IO_TX_DESC_TSO_EN_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_l4_proto_idx( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_l4_proto_idx(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK) - >> ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK) >> ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT; } -static inline void set_ena_eth_io_tx_desc_l4_proto_idx( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_l4_proto_idx(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << 
ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) - & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) & ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_l3_csum_en( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_l3_csum_en(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK) - >> ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK) >> ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT; } -static inline void set_ena_eth_io_tx_desc_l3_csum_en( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_l3_csum_en(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) - & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_en( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_en(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK) - >> ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK) >> ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT; } -static inline void set_ena_eth_io_tx_desc_l4_csum_en( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_l4_csum_en(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) - & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_ethernet_fcs_dis( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_ethernet_fcs_dis(const struct 
ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK) - >> ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK) >> ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT; } -static inline void set_ena_eth_io_tx_desc_ethernet_fcs_dis( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_ethernet_fcs_dis(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT) - & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT) & ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_partial( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_l4_csum_partial(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK) - >> ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK) >> ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT; } -static inline void set_ena_eth_io_tx_desc_l4_csum_partial( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_l4_csum_partial(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= - (val << ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) - & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) & ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_req_id_lo( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_req_id_lo(const struct ena_eth_io_tx_desc *p) { - return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK) - >> ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT; + return (p->meta_ctrl & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK) >> ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT; } -static inline void 
set_ena_eth_io_tx_desc_req_id_lo( - struct ena_eth_io_tx_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_desc_req_id_lo(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) - & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK; + p->meta_ctrl |= (val << ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) & ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_addr_hi( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_addr_hi(const struct ena_eth_io_tx_desc *p) { return p->buff_addr_hi_hdr_sz & ENA_ETH_IO_TX_DESC_ADDR_HI_MASK; } -static inline void set_ena_eth_io_tx_desc_addr_hi( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_addr_hi(struct ena_eth_io_tx_desc *p, uint32_t val) { p->buff_addr_hi_hdr_sz |= val & ENA_ETH_IO_TX_DESC_ADDR_HI_MASK; } -static inline uint32_t get_ena_eth_io_tx_desc_header_length( - const struct ena_eth_io_tx_desc *p) +static inline uint32_t get_ena_eth_io_tx_desc_header_length(const struct ena_eth_io_tx_desc *p) { - return (p->buff_addr_hi_hdr_sz & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK) - >> ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT; + return (p->buff_addr_hi_hdr_sz & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK) >> ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT; } -static inline void set_ena_eth_io_tx_desc_header_length( - struct ena_eth_io_tx_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_desc_header_length(struct ena_eth_io_tx_desc *p, uint32_t val) { - p->buff_addr_hi_hdr_sz |= - (val << ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) - & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK; + p->buff_addr_hi_hdr_sz |= (val << ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) & ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_lo( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_lo(const struct ena_eth_io_tx_meta_desc *p) { 
return p->len_ctrl & ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK; } -static inline void set_ena_eth_io_tx_meta_desc_req_id_lo( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_req_id_lo(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->len_ctrl |= val & ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_ext_valid( - const struct ena_eth_io_tx_meta_desc *p) -{ - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK) - >> ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT; -} - -static inline void set_ena_eth_io_tx_meta_desc_ext_valid( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) -{ - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT) - & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK; -} - -static inline uint32_t get_ena_eth_io_tx_meta_desc_word3_valid( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_ext_valid(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_WORD3_VALID_MASK) - >> ENA_ETH_IO_TX_META_DESC_WORD3_VALID_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK) >> ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_word3_valid( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_ext_valid(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_WORD3_VALID_SHIFT) - & ENA_ETH_IO_TX_META_DESC_WORD3_VALID_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT) & ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_hi_ptp( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_hi(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_MASK) - >> 
ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK) >> ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_mss_hi_ptp( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_mss_hi(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_SHIFT) - & ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_eth_meta_type( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_eth_meta_type(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK) - >> ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK) >> ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_eth_meta_type( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_eth_meta_type(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT) - & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT) & ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_store( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_store(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK) - >> ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK) >> ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_meta_store( - 
struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_meta_store(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT) - & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT) & ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_desc( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_meta_desc(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK) - >> ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK) >> ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_meta_desc( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_meta_desc(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT) - & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT) & ENA_ETH_IO_TX_META_DESC_META_DESC_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_phase( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_phase(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_PHASE_MASK) - >> ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_PHASE_MASK) >> ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_phase( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_phase(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) - & ENA_ETH_IO_TX_META_DESC_PHASE_MASK; + p->len_ctrl |= (val << 
ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_META_DESC_PHASE_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_first( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_first(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_FIRST_MASK) - >> ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_FIRST_MASK) >> ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_first( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_first(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT) - & ENA_ETH_IO_TX_META_DESC_FIRST_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT) & ENA_ETH_IO_TX_META_DESC_FIRST_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_last( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_last(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_LAST_MASK) - >> ENA_ETH_IO_TX_META_DESC_LAST_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_LAST_MASK) >> ENA_ETH_IO_TX_META_DESC_LAST_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_last( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_last(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_LAST_SHIFT) - & ENA_ETH_IO_TX_META_DESC_LAST_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_LAST_SHIFT) & ENA_ETH_IO_TX_META_DESC_LAST_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_comp_req( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_comp_req(const struct ena_eth_io_tx_meta_desc *p) { - return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK) - >> 
ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT; + return (p->len_ctrl & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_comp_req( - struct ena_eth_io_tx_meta_desc *p, uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_comp_req(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT) - & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK; + p->len_ctrl |= (val << ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_hi( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_req_id_hi(const struct ena_eth_io_tx_meta_desc *p) { return p->word1 & ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK; } -static inline void set_ena_eth_io_tx_meta_desc_req_id_hi( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_req_id_hi(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word1 |= val & ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_len( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_len(const struct ena_eth_io_tx_meta_desc *p) { return p->word2 & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; } -static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_len( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_len(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { p->word2 |= val & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_off( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_l3_hdr_off(const struct ena_eth_io_tx_meta_desc *p) { - return (p->word2 & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK) - >> 
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT; + return (p->word2 & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK) >> ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_off( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_l3_hdr_off(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->word2 |= - (val << ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) - & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK; + p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) & ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words(const struct ena_eth_io_tx_meta_desc *p) { - return (p->word2 & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK) - >> ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT; + return (p->word2 & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK) >> ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT; } -static inline void set_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_l4_hdr_len_in_words(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->word2 |= - (val << ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) - & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK; + p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK; } -static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_lo( - const struct ena_eth_io_tx_meta_desc *p) +static inline uint32_t get_ena_eth_io_tx_meta_desc_mss_lo(const struct ena_eth_io_tx_meta_desc *p) { - return (p->word2 & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK) - >> ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT; + return (p->word2 & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK) >> ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT; } -static inline 
void set_ena_eth_io_tx_meta_desc_mss_lo( - struct ena_eth_io_tx_meta_desc *p, - uint32_t val) +static inline void set_ena_eth_io_tx_meta_desc_mss_lo(struct ena_eth_io_tx_meta_desc *p, uint32_t val) { - p->word2 |= - (val << ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) - & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK; + p->word2 |= (val << ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) & ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK; } -static inline uint8_t get_ena_eth_io_tx_cdesc_phase( - const struct ena_eth_io_tx_cdesc *p) +static inline uint8_t get_ena_eth_io_tx_cdesc_phase(const struct ena_eth_io_tx_cdesc *p) { return p->flags & ENA_ETH_IO_TX_CDESC_PHASE_MASK; } -static inline void set_ena_eth_io_tx_cdesc_phase( - struct ena_eth_io_tx_cdesc *p, - uint8_t val) +static inline void set_ena_eth_io_tx_cdesc_phase(struct ena_eth_io_tx_cdesc *p, uint8_t val) { p->flags |= val & ENA_ETH_IO_TX_CDESC_PHASE_MASK; } -static inline uint8_t get_ena_eth_io_rx_desc_phase( - const struct ena_eth_io_rx_desc *p) +static inline uint8_t get_ena_eth_io_rx_desc_phase(const struct ena_eth_io_rx_desc *p) { return p->ctrl & ENA_ETH_IO_RX_DESC_PHASE_MASK; } -static inline void set_ena_eth_io_rx_desc_phase( - struct ena_eth_io_rx_desc *p, - uint8_t val) +static inline void set_ena_eth_io_rx_desc_phase(struct ena_eth_io_rx_desc *p, uint8_t val) { p->ctrl |= val & ENA_ETH_IO_RX_DESC_PHASE_MASK; } -static inline uint8_t get_ena_eth_io_rx_desc_first( - const struct ena_eth_io_rx_desc *p) +static inline uint8_t get_ena_eth_io_rx_desc_first(const struct ena_eth_io_rx_desc *p) { - return (p->ctrl & ENA_ETH_IO_RX_DESC_FIRST_MASK) - >> ENA_ETH_IO_RX_DESC_FIRST_SHIFT; + return (p->ctrl & ENA_ETH_IO_RX_DESC_FIRST_MASK) >> ENA_ETH_IO_RX_DESC_FIRST_SHIFT; } -static inline void set_ena_eth_io_rx_desc_first( - struct ena_eth_io_rx_desc *p, - uint8_t val) +static inline void set_ena_eth_io_rx_desc_first(struct ena_eth_io_rx_desc *p, uint8_t val) { - p->ctrl |= - (val << ENA_ETH_IO_RX_DESC_FIRST_SHIFT) - & ENA_ETH_IO_RX_DESC_FIRST_MASK; + 
p->ctrl |= (val << ENA_ETH_IO_RX_DESC_FIRST_SHIFT) & ENA_ETH_IO_RX_DESC_FIRST_MASK; } -static inline uint8_t get_ena_eth_io_rx_desc_last( - const struct ena_eth_io_rx_desc *p) +static inline uint8_t get_ena_eth_io_rx_desc_last(const struct ena_eth_io_rx_desc *p) { - return (p->ctrl & ENA_ETH_IO_RX_DESC_LAST_MASK) - >> ENA_ETH_IO_RX_DESC_LAST_SHIFT; + return (p->ctrl & ENA_ETH_IO_RX_DESC_LAST_MASK) >> ENA_ETH_IO_RX_DESC_LAST_SHIFT; } -static inline void set_ena_eth_io_rx_desc_last( - struct ena_eth_io_rx_desc *p, - uint8_t val) +static inline void set_ena_eth_io_rx_desc_last(struct ena_eth_io_rx_desc *p, uint8_t val) { - p->ctrl |= - (val << ENA_ETH_IO_RX_DESC_LAST_SHIFT) - & ENA_ETH_IO_RX_DESC_LAST_MASK; + p->ctrl |= (val << ENA_ETH_IO_RX_DESC_LAST_SHIFT) & ENA_ETH_IO_RX_DESC_LAST_MASK; } -static inline uint8_t get_ena_eth_io_rx_desc_comp_req( - const struct ena_eth_io_rx_desc *p) +static inline uint8_t get_ena_eth_io_rx_desc_comp_req(const struct ena_eth_io_rx_desc *p) { - return (p->ctrl & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK) - >> ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT; + return (p->ctrl & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK) >> ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT; } -static inline void set_ena_eth_io_rx_desc_comp_req( - struct ena_eth_io_rx_desc *p, - uint8_t val) +static inline void set_ena_eth_io_rx_desc_comp_req(struct ena_eth_io_rx_desc *p, uint8_t val) { - p->ctrl |= - (val << ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT) - & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK; + p->ctrl |= (val << ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT) & ENA_ETH_IO_RX_DESC_COMP_REQ_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_proto_idx( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_proto_idx(const struct ena_eth_io_rx_cdesc_base *p) { return p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK; } -static inline void set_ena_eth_io_rx_cdesc_base_l3_proto_idx( - struct ena_eth_io_rx_cdesc_base *p, - uint32_t val) +static inline void 
set_ena_eth_io_rx_cdesc_base_l3_proto_idx(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { p->status |= val & ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_src_vlan_cnt( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_src_vlan_cnt(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_src_vlan_cnt( - struct ena_eth_io_rx_cdesc_base *p, - uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_src_vlan_cnt(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= - (val << ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_proto_idx( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_proto_idx(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_l4_proto_idx( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_l4_proto_idx(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK; } -static 
inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum_err( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum_err(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_l3_csum_err( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_l3_csum_err(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_csum_err( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_l4_csum_err(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_l4_csum_err( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_l4_csum_err(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_ipv4_frag( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_ipv4_frag(const struct 
ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_ipv4_frag( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_ipv4_frag(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_phase( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_phase(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_phase( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_phase(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum2( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_l3_csum2(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT; } -static inline void 
set_ena_eth_io_rx_cdesc_base_l3_csum2( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_l3_csum2(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_first( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_first(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_first( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_first(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_last( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_last(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_last( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_last(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK; + p->status |= (val << 
ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK; } -static inline uint32_t get_ena_eth_io_rx_cdesc_base_buffer( - const struct ena_eth_io_rx_cdesc_base *p) +static inline uint32_t get_ena_eth_io_rx_cdesc_base_buffer(const struct ena_eth_io_rx_cdesc_base *p) { - return (p->status & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK) - >> ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT; + return (p->status & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT; } -static inline void set_ena_eth_io_rx_cdesc_base_buffer( - struct ena_eth_io_rx_cdesc_base *p, uint32_t val) +static inline void set_ena_eth_io_rx_cdesc_base_buffer(struct ena_eth_io_rx_cdesc_base *p, uint32_t val) { - p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT) - & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK; + p->status |= (val << ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT) & ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK; } -static inline uint32_t get_ena_eth_io_intr_reg_rx_intr_delay( - const struct ena_eth_io_intr_reg *p) +static inline uint32_t get_ena_eth_io_intr_reg_rx_intr_delay(const struct ena_eth_io_intr_reg *p) { return p->intr_control & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; } -static inline void set_ena_eth_io_intr_reg_rx_intr_delay( - struct ena_eth_io_intr_reg *p, uint32_t val) +static inline void set_ena_eth_io_intr_reg_rx_intr_delay(struct ena_eth_io_intr_reg *p, uint32_t val) { p->intr_control |= val & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK; } -static inline uint32_t get_ena_eth_io_intr_reg_tx_intr_delay( - const struct ena_eth_io_intr_reg *p) +static inline uint32_t get_ena_eth_io_intr_reg_tx_intr_delay(const struct ena_eth_io_intr_reg *p) { - return (p->intr_control & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK) - >> ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT; + return (p->intr_control & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK) >> ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT; } -static inline void set_ena_eth_io_intr_reg_tx_intr_delay( - struct ena_eth_io_intr_reg *p, uint32_t 
val) +static inline void set_ena_eth_io_intr_reg_tx_intr_delay(struct ena_eth_io_intr_reg *p, uint32_t val) { - p->intr_control |= (val << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) - & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK; + p->intr_control |= (val << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) & ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK; } -static inline uint32_t get_ena_eth_io_intr_reg_intr_unmask( - const struct ena_eth_io_intr_reg *p) +static inline uint32_t get_ena_eth_io_intr_reg_intr_unmask(const struct ena_eth_io_intr_reg *p) { - return (p->intr_control & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK) - >> ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT; + return (p->intr_control & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK) >> ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT; } -static inline void set_ena_eth_io_intr_reg_intr_unmask( - struct ena_eth_io_intr_reg *p, uint32_t val) +static inline void set_ena_eth_io_intr_reg_intr_unmask(struct ena_eth_io_intr_reg *p, uint32_t val) { - p->intr_control |= (val << ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT) - & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK; + p->intr_control |= (val << ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT) & ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK; } -static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_numa( - const struct ena_eth_io_numa_node_cfg_reg *p) +static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_numa(const struct ena_eth_io_numa_node_cfg_reg *p) { return p->numa_cfg & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK; } -static inline void set_ena_eth_io_numa_node_cfg_reg_numa( - struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) +static inline void set_ena_eth_io_numa_node_cfg_reg_numa(struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) { p->numa_cfg |= val & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK; } -static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_enabled( - const struct ena_eth_io_numa_node_cfg_reg *p) +static inline uint32_t get_ena_eth_io_numa_node_cfg_reg_enabled(const struct ena_eth_io_numa_node_cfg_reg *p) { - return 
(p->numa_cfg & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK) - >> ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT; + return (p->numa_cfg & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK) >> ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT; } -static inline void set_ena_eth_io_numa_node_cfg_reg_enabled( - struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) +static inline void set_ena_eth_io_numa_node_cfg_reg_enabled(struct ena_eth_io_numa_node_cfg_reg *p, uint32_t val) { - p->numa_cfg |= (val << ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT) - & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK; + p->numa_cfg |= (val << ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT) & ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK; } #endif /* !defined(ENA_DEFS_LINUX_MAINLINE) */ diff --git a/drivers/net/ena/base/ena_defs/ena_gen_info.h b/drivers/net/ena/base/ena_defs/ena_gen_info.h index 3d2520963..e87bcfd88 100644 --- a/drivers/net/ena/base/ena_defs/ena_gen_info.h +++ b/drivers/net/ena/base/ena_defs/ena_gen_info.h @@ -31,5 +31,5 @@ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
 */
-#define ENA_GEN_DATE "Sun Jun 5 10:24:39 IDT 2016"
-#define ENA_GEN_COMMIT "17146ed"
+#define ENA_GEN_DATE "Sun Oct 23 12:27:32 IDT 2016"
+#define ENA_GEN_COMMIT "79d82fa"
diff --git a/drivers/net/ena/base/ena_defs/ena_includes.h b/drivers/net/ena/base/ena_defs/ena_includes.h
index a86c876fe..30a920a8e 100644
--- a/drivers/net/ena/base/ena_defs/ena_includes.h
+++ b/drivers/net/ena/base/ena_defs/ena_includes.h
@@ -35,5 +35,3 @@
 #include "ena_regs_defs.h"
 #include "ena_admin_defs.h"
 #include "ena_eth_io_defs.h"
-#include "ena_efa_admin_defs.h"
-#include "ena_efa_io_defs.h"
diff --git a/drivers/net/ena/base/ena_defs/ena_regs_defs.h b/drivers/net/ena/base/ena_defs/ena_regs_defs.h
index d02412781..b0870f254 100644
--- a/drivers/net/ena/base/ena_defs/ena_regs_defs.h
+++ b/drivers/net/ena/base/ena_defs/ena_regs_defs.h
@@ -34,6 +34,38 @@
 #ifndef _ENA_REGS_H_
 #define _ENA_REGS_H_
 
+enum ena_regs_reset_reason_types {
+	ENA_REGS_RESET_NORMAL = 0,
+
+	ENA_REGS_RESET_KEEP_ALIVE_TO = 1,
+
+	ENA_REGS_RESET_ADMIN_TO = 2,
+
+	ENA_REGS_RESET_MISS_TX_CMPL = 3,
+
+	ENA_REGS_RESET_INV_RX_REQ_ID = 4,
+
+	ENA_REGS_RESET_INV_TX_REQ_ID = 5,
+
+	ENA_REGS_RESET_TOO_MANY_RX_DESCS = 6,
+
+	ENA_REGS_RESET_INIT_ERR = 7,
+
+	ENA_REGS_RESET_DRIVER_INVALID_STATE = 8,
+
+	ENA_REGS_RESET_OS_TRIGGER = 9,
+
+	ENA_REGS_RESET_OS_NETDEV_WD = 10,
+
+	ENA_REGS_RESET_SHUTDOWN = 11,
+
+	ENA_REGS_RESET_USER_TRIGGER = 12,
+
+	ENA_REGS_RESET_GENERIC = 13,
+
+	ENA_REGS_RESET_MISS_INTERRUPT = 14,
+};
+
 /* ena_registers offsets */
 #define ENA_REGS_VERSION_OFF 0x0
 #define ENA_REGS_CONTROLLER_VERSION_OFF 0x4
@@ -80,6 +112,8 @@
 #define ENA_REGS_CAPS_RESET_TIMEOUT_MASK 0x3e
 #define ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT 8
 #define ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK 0xff00
+#define ENA_REGS_CAPS_ADMIN_CMD_TO_SHIFT 16
+#define ENA_REGS_CAPS_ADMIN_CMD_TO_MASK 0xf0000
 
 /* aq_caps register */
 #define ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK 0xffff
@@ -104,6 +138,8 @@
 #define ENA_REGS_DEV_CTL_QUIESCENT_MASK 0x4
 #define ENA_REGS_DEV_CTL_IO_RESUME_SHIFT 3
 #define ENA_REGS_DEV_CTL_IO_RESUME_MASK 0x8
+#define ENA_REGS_DEV_CTL_RESET_REASON_SHIFT 28
+#define ENA_REGS_DEV_CTL_RESET_REASON_MASK 0xf0000000
 
 /* dev_sts register */
 #define ENA_REGS_DEV_STS_READY_MASK 0x1
diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c
index 290a5666f..4c4989a3f 100644
--- a/drivers/net/ena/base/ena_eth_com.c
+++ b/drivers/net/ena/base/ena_eth_com.c
@@ -43,11 +43,10 @@ static inline struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
 	head_masked = io_cq->head & (io_cq->q_depth - 1);
 	expected_phase = io_cq->phase;
 
-	cdesc = (struct ena_eth_io_rx_cdesc_base *)
-		((unsigned char *)io_cq->cdesc_addr.virt_addr
+	cdesc = (struct ena_eth_io_rx_cdesc_base *)(io_cq->cdesc_addr.virt_addr
 		+ (head_masked * io_cq->cdesc_entry_size_in_bytes));
 
-	desc_phase = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
+	desc_phase = (READ_ONCE(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
 		ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT;
 
 	if (desc_phase != expected_phase)
@@ -74,7 +73,7 @@ static inline void *get_sq_desc(struct ena_com_io_sq *io_sq)
 
 	offset = tail_masked * io_sq->desc_entry_size;
 
-	return (unsigned char *)io_sq->desc_addr.virt_addr + offset;
+	return (void *)((uintptr_t)io_sq->desc_addr.virt_addr + offset);
 }
 
 static inline void ena_com_copy_curr_sq_desc_to_dev(struct ena_com_io_sq *io_sq)
@@ -86,8 +85,8 @@ static inline void ena_com_copy_curr_sq_desc_to_dev(struct ena_com_io_sq *io_sq)
 	if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
 		return;
 
-	memcpy_toio((unsigned char *)io_sq->desc_addr.pbuf_dev_addr + offset,
-		    (unsigned char *)io_sq->desc_addr.virt_addr + offset,
+	memcpy_toio(io_sq->desc_addr.pbuf_dev_addr + offset,
+		    io_sq->desc_addr.virt_addr + offset,
 		    io_sq->desc_entry_size);
 }
 
@@ -125,11 +124,11 @@ static inline struct ena_eth_io_rx_cdesc_base *
 {
 	idx &= (io_cq->q_depth - 1);
 	return (struct ena_eth_io_rx_cdesc_base *)
-		((unsigned char *)io_cq->cdesc_addr.virt_addr +
-		idx * io_cq->cdesc_entry_size_in_bytes);
+		((uintptr_t)io_cq->cdesc_addr.virt_addr +
+		idx * io_cq->cdesc_entry_size_in_bytes);
 }
 
-static inline int ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
+static inline u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
 					   u16 *first_cdesc_idx)
 {
 	struct ena_eth_io_rx_cdesc_base *cdesc;
@@ -143,7 +142,7 @@ static inline int ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
 		ena_com_cq_inc_head(io_cq);
 		count++;
 
-		last = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
+		last = (READ_ONCE(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
 			ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT;
 	} while (!last);
 
@@ -183,9 +182,8 @@ static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq,
 	return false;
 }
 
-static inline void ena_com_create_and_store_tx_meta_desc(
-	struct ena_com_io_sq *io_sq,
-	struct ena_com_tx_ctx *ena_tx_ctx)
+static inline void ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
+							 struct ena_com_tx_ctx *ena_tx_ctx)
 {
 	struct ena_eth_io_tx_meta_desc *meta_desc = NULL;
 	struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;
@@ -203,8 +201,8 @@ static inline void ena_com_create_and_store_tx_meta_desc(
 		ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK;
 	/* bits 10-13 of the mss */
 	meta_desc->len_ctrl |= ((ena_meta->mss >> 10) <<
-		ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_SHIFT) &
-		ENA_ETH_IO_TX_META_DESC_MSS_HI_PTP_MASK;
+		ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) &
+		ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK;
 
 	/* Extended meta desc */
 	meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK;
@@ -237,11 +235,11 @@ static inline void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
 					struct ena_eth_io_rx_cdesc_base *cdesc)
 {
-	ena_rx_ctx->l3_proto = (enum ena_eth_io_l3_proto_index)(cdesc->status &
-		ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK);
-	ena_rx_ctx->l4_proto = (enum ena_eth_io_l4_proto_index)
-		((cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >>
-		ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT);
+	ena_rx_ctx->l3_proto = cdesc->status &
+		ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK;
+	ena_rx_ctx->l4_proto =
+		(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >>
+		ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT;
 	ena_rx_ctx->l3_csum_err =
 		(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >>
 		ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT;
@@ -280,8 +278,8 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
 	bool have_meta;
 	u64 addr_hi;
 
-	ENA_ASSERT(io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX,
-		   "wrong Q type");
+	ENA_WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_TX,
+		 "wrong Q type");
 
 	/* num_bufs +1 for potential meta desc */
 	if (ena_com_sq_empty_space(io_sq) < (num_bufs + 1)) {
@@ -410,8 +408,8 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
 	u16 nb_hw_desc;
 	u16 i;
 
-	ENA_ASSERT(io_cq->direction == ENA_COM_IO_QUEUE_DIRECTION_RX,
-		   "wrong Q type");
+	ENA_WARN(io_cq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX,
+		 "wrong Q type");
 
 	nb_hw_desc = ena_com_cdesc_rx_pkt_get(io_cq, &cdesc_idx);
 	if (nb_hw_desc == 0) {
@@ -455,8 +453,8 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 {
 	struct ena_eth_io_rx_desc *desc;
 
-	ENA_ASSERT(io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_RX,
-		   "wrong Q type");
+	ENA_WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX,
+		 "wrong Q type");
 
 	if (unlikely(ena_com_sq_empty_space(io_sq) == 0))
 		return ENA_COM_NO_SPACE;
@@ -475,8 +473,7 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 	desc->buff_addr_lo = (u32)ena_buf->paddr;
 	desc->buff_addr_hi =
-		((ena_buf->paddr &
-		GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
+		((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
 
 	ena_com_sq_update_tail(io_sq);
@@ -493,20 +490,37 @@ int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id)
 	expected_phase = io_cq->phase;
 
 	cdesc = (struct ena_eth_io_tx_cdesc *)
-		((unsigned char *)io_cq->cdesc_addr.virt_addr
-		+ (masked_head * io_cq->cdesc_entry_size_in_bytes));
+		((uintptr_t)io_cq->cdesc_addr.virt_addr +
+		(masked_head * io_cq->cdesc_entry_size_in_bytes));
 
 	/* When the current completion descriptor phase isn't the same as the
 	 * expected, it mean that the device still didn't update
 	 * this completion.
 	 */
-	cdesc_phase = cdesc->flags & ENA_ETH_IO_TX_CDESC_PHASE_MASK;
+	cdesc_phase = READ_ONCE(cdesc->flags) & ENA_ETH_IO_TX_CDESC_PHASE_MASK;
 	if (cdesc_phase != expected_phase)
 		return ENA_COM_TRY_AGAIN;
 
+	if (unlikely(cdesc->req_id >= io_cq->q_depth)) {
+		ena_trc_err("Invalid req id %d\n", cdesc->req_id);
+		return ENA_COM_INVAL;
+	}
+
 	ena_com_cq_inc_head(io_cq);
 
-	*req_id = cdesc->req_id;
+	*req_id = READ_ONCE(cdesc->req_id);
 
 	return 0;
 }
+
+bool ena_com_cq_empty(struct ena_com_io_cq *io_cq)
+{
+	struct ena_eth_io_rx_cdesc_base *cdesc;
+
+	cdesc = ena_com_get_next_rx_cdesc(io_cq);
+	if (cdesc)
+		return false;
+	else
+		return true;
+}
+
diff --git a/drivers/net/ena/base/ena_eth_com.h b/drivers/net/ena/base/ena_eth_com.h
index 71a880c0f..56ea4ae64 100644
--- a/drivers/net/ena/base/ena_eth_com.h
+++ b/drivers/net/ena/base/ena_eth_com.h
@@ -92,10 +92,12 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id);
 
+bool ena_com_cq_empty(struct ena_com_io_cq *io_cq);
+
 static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq,
 				       struct ena_eth_io_intr_reg *intr_reg)
 {
-	ENA_REG_WRITE32(intr_reg->intr_control, io_cq->unmask_reg);
+	ENA_REG_WRITE32(io_cq->bus, intr_reg->intr_control, io_cq->unmask_reg);
 }
 
 static inline int ena_com_sq_empty_space(struct ena_com_io_sq *io_sq)
@@ -118,7 +120,7 @@ static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
 	ena_trc_dbg("write submission queue doorbell for queue: %d tail: %d\n",
 		    io_sq->qid, tail);
 
-	ENA_REG_WRITE32(tail, io_sq->db_addr);
+	ENA_REG_WRITE32(io_sq->bus, tail, io_sq->db_addr);
 
 	return 0;
 }
@@ -135,7 +137,7 @@ static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq)
 	if (io_cq->cq_head_db_reg && need_update) {
 		ena_trc_dbg("Write completion queue doorbell for queue %d: head: %d\n",
 			    io_cq->qid, head);
-		ENA_REG_WRITE32(head, io_cq->cq_head_db_reg);
+		ENA_REG_WRITE32(io_cq->bus, head, io_cq->cq_head_db_reg);
 		io_cq->last_head_update = head;
 	}
 
@@ -153,7 +155,7 @@ static inline void ena_com_update_numa_node(struct ena_com_io_cq *io_cq,
 	numa_cfg.numa_cfg = (numa_node & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK)
 		| ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK;
 
-	ENA_REG_WRITE32(numa_cfg.numa_cfg, io_cq->numa_node_cfg_reg);
+	ENA_REG_WRITE32(io_cq->bus, numa_cfg.numa_cfg, io_cq->numa_node_cfg_reg);
 }
 
 static inline void ena_com_comp_ack(struct ena_com_io_sq *io_sq, u16 elem)
diff --git a/drivers/net/ena/base/ena_plat.h b/drivers/net/ena/base/ena_plat.h
index b5b645456..278175f39 100644
--- a/drivers/net/ena/base/ena_plat.h
+++ b/drivers/net/ena/base/ena_plat.h
@@ -42,8 +42,6 @@
 #else
 #include "ena_plat_dpdk.h"
 #endif
-#elif defined(__FreeBSD__)
-#include "ena_plat_dpdk.h"
 #elif defined(_WIN32)
 #include "ena_plat_windows.h"
 #else
diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 93345199a..e2af4ee2c 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -73,10 +73,10 @@ typedef uint64_t dma_addr_t;
 #define ENA_COM_INVAL -EINVAL
 #define ENA_COM_NO_SPACE -ENOSPC
 #define ENA_COM_NO_DEVICE -ENODEV
-#define ENA_COM_PERMISSION -EPERM
 #define ENA_COM_TIMER_EXPIRED -ETIME
 #define ENA_COM_FAULT -EFAULT
 #define ENA_COM_TRY_AGAIN -EAGAIN
+#define ENA_COM_UNSUPPORTED -EOPNOTSUPP
 
 #define ____cacheline_aligned __rte_cache_aligned
@@ -138,6 +138,15 @@ typedef uint64_t dma_addr_t;
 #define ena_trc_err(format, arg...) do { } while (0)
 #endif /* RTE_LIBRTE_ENA_COM_DEBUG */
 
+#define ENA_WARN(cond, format, arg...) \
+do { \
+	if (unlikely(cond)) { \
+		ena_trc_err( \
+			"Warn failed on %s:%s:%d:" format, \
+			__FILE__, __func__, __LINE__, ##arg); \
+	} \
+} while (0)
+
 /* Spinlock related methods */
 #define ena_spinlock_t rte_spinlock_t
 #define ENA_SPINLOCK_INIT(spinlock) rte_spinlock_init(&spinlock)
@@ -177,10 +186,21 @@ typedef uint64_t dma_addr_t;
 #define ENA_WAIT_EVENT_SIGNAL(waitevent) pthread_cond_signal(&waitevent.cond)
 /* pthread condition doesn't need to be rearmed after usage */
 #define ENA_WAIT_EVENT_CLEAR(...)
+#define ENA_WAIT_EVENT_DESTROY(waitqueue) ((void)(waitqueue))
 
 #define ena_wait_event_t ena_wait_queue_t
 #define ENA_MIGHT_SLEEP()
 
+#define ENA_TIME_EXPIRE(timeout) (timeout < rte_get_timer_cycles())
+#define ENA_GET_SYSTEM_TIMEOUT(timeout_us) \
+	(timeout_us * rte_get_timer_hz() / 1000000 + rte_get_timer_cycles())
+
+/*
+ * Each rte_memzone should have unique name.
+ * To satisfy it, count number of allocations and add it to name.
+ */
+extern uint32_t ena_alloc_cnt;
+
 #define ENA_MEM_ALLOC_COHERENT(dmadev, size, virt, phys, handle) \
 	do { \
 		const struct rte_memzone *mz; \
@@ -200,7 +220,8 @@ typedef uint64_t dma_addr_t;
 		ENA_TOUCH(dmadev); \
 		rte_memzone_free(handle); })
 
-#define ENA_MEM_ALLOC_COHERENT_NODE(dmadev, size, virt, phys, node, dev_node) \
+#define ENA_MEM_ALLOC_COHERENT_NODE( \
+	dmadev, size, virt, phys, mem_handle, node, dev_node) \
 	do { \
 		const struct rte_memzone *mz; \
 		char z_name[RTE_MEMZONE_NAMESIZE]; \
@@ -212,6 +233,7 @@ typedef uint64_t dma_addr_t;
 		memset(mz->addr, 0, size); \
 		virt = mz->addr; \
 		phys = mz->iova; \
+		(void)mem_handle; \
 	} while (0)
 
 #define ENA_MEM_ALLOC_NODE(dmadev, size, virt, node, dev_node) \
@@ -230,8 +252,10 @@ typedef uint64_t dma_addr_t;
 #define ENA_MEM_ALLOC(dmadev, size) rte_zmalloc(NULL, size, 1)
 #define ENA_MEM_FREE(dmadev, ptr) ({ENA_TOUCH(dmadev); rte_free(ptr); })
 
-#define ENA_REG_WRITE32(value, reg) rte_write32_relaxed((value), (reg))
-#define ENA_REG_READ32(reg) rte_read32_relaxed((reg))
+#define ENA_REG_WRITE32(bus, value, reg) \
+	({ (void)(bus); rte_write32_relaxed((value), (reg)); })
+#define ENA_REG_READ32(bus, reg) \
+	({ (void)(bus); rte_read32_relaxed((reg)); })
 
 #define ATOMIC32_INC(i32_ptr) rte_atomic32_inc(i32_ptr)
 #define ATOMIC32_DEC(i32_ptr) rte_atomic32_dec(i32_ptr)
@@ -247,4 +271,11 @@ typedef uint64_t dma_addr_t;
 #define PTR_ERR(error) ((long)(void *)error)
 #define might_sleep()
 
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+#ifndef READ_ONCE
+#define READ_ONCE(var) (*((volatile typeof(var) *)(&(var))))
+#endif
+
 #endif /* DPDK_ENA_COM_ENA_PLAT_DPDK_H_ */
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 190ed40d1..71d6838a0 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -114,6 +114,12 @@ struct ena_stats {
 #define ENA_STAT_GLOBAL_ENTRY(stat) \
 	ENA_STAT_ENTRY(stat, dev)
 
+/*
+ * Each rte_memzone should have unique name.
+ * To satisfy it, count number of allocation and add it to name.
+ */ +uint32_t ena_alloc_cnt; + static const struct ena_stats ena_stats_global_strings[] = { ENA_STAT_GLOBAL_ENTRY(tx_timeout), ENA_STAT_GLOBAL_ENTRY(io_suspend), @@ -195,6 +201,8 @@ static const struct rte_pci_id pci_id_ena_map[] = { { .device_id = 0 }, }; +static struct ena_aenq_handlers empty_aenq_handlers; + static int ena_device_init(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx); static int ena_dev_configure(struct rte_eth_dev *dev); @@ -346,9 +354,6 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf, ena_meta->mss = mbuf->tso_segsz; ena_meta->l3_hdr_len = mbuf->l3_len; ena_meta->l3_hdr_offset = mbuf->l2_len; - /* this param needed only for TSO */ - ena_meta->l3_outer_hdr_len = 0; - ena_meta->l3_outer_hdr_offset = 0; ena_tx_ctx->meta_valid = true; } else { @@ -388,7 +393,7 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev) rc = ena_com_set_host_attributes(ena_dev); if (rc) { RTE_LOG(ERR, PMD, "Cannot set host attributes\n"); - if (rc != -EPERM) + if (rc != -ENA_COM_UNSUPPORTED) goto err; } @@ -441,7 +446,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter) rc = ena_com_set_host_attributes(&adapter->ena_dev); if (rc) { RTE_LOG(WARNING, PMD, "Cannot set host attributes\n"); - if (rc != -EPERM) + if (rc != -ENA_COM_UNSUPPORTED) goto err; } @@ -496,7 +501,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, ret = ena_com_indirect_table_fill_entry(ena_dev, i, entry_value); - if (unlikely(ret && (ret != ENA_COM_PERMISSION))) { + if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) { RTE_LOG(ERR, PMD, "Cannot fill indirect table\n"); ret = -ENOTSUP; @@ -506,7 +511,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, } ret = ena_com_indirect_table_set(ena_dev); - if (unlikely(ret && (ret != ENA_COM_PERMISSION))) { + if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) { RTE_LOG(ERR, PMD, "Cannot flush the indirect table\n"); ret = -ENOTSUP; goto err; @@ -537,7 +542,7 @@ static int 
ena_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; ret = ena_com_indirect_table_get(ena_dev, indirect_table); - if (unlikely(ret && (ret != ENA_COM_PERMISSION))) { + if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) { RTE_LOG(ERR, PMD, "cannot get indirect table\n"); ret = -ENOTSUP; goto err; @@ -571,7 +576,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) val = i % nb_rx_queues; rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(val)); - if (unlikely(rc && (rc != ENA_COM_PERMISSION))) { + if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { RTE_LOG(ERR, PMD, "Cannot fill indirect table\n"); goto err_fill_indir; } @@ -579,19 +584,19 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, ENA_HASH_KEY_SIZE, 0xFFFFFFFF); - if (unlikely(rc && (rc != ENA_COM_PERMISSION))) { + if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { RTE_LOG(INFO, PMD, "Cannot fill hash function\n"); goto err_fill_indir; } rc = ena_com_set_default_hash_ctrl(ena_dev); - if (unlikely(rc && (rc != ENA_COM_PERMISSION))) { + if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { RTE_LOG(INFO, PMD, "Cannot fill hash control\n"); goto err_fill_indir; } rc = ena_com_indirect_table_set(ena_dev); - if (unlikely(rc && (rc != ENA_COM_PERMISSION))) { + if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { RTE_LOG(ERR, PMD, "Cannot flush the indirect table\n"); goto err_fill_indir; } @@ -1236,7 +1241,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, ena_com_set_mmio_read_mode(ena_dev, readless_supported); /* reset device */ - rc = ena_com_dev_reset(ena_dev); + rc = ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (rc) { RTE_LOG(ERR, PMD, "cannot reset device\n"); goto err_mmio_read_less; @@ -1252,7 +1257,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, ena_dev->dma_addr_bits = ena_com_get_dma_width(ena_dev); /* ENA device administration layer init */ - rc = 
ena_com_admin_init(ena_dev, NULL, true); + rc = ena_com_admin_init(ena_dev, &empty_aenq_handlers, true); if (rc) { RTE_LOG(ERR, PMD, "cannot initialize ena admin queue with device\n"); @@ -1854,3 +1859,24 @@ ena_init_log(void) if (ena_logtype_driver >= 0) rte_log_set_level(ena_logtype_driver, RTE_LOG_NOTICE); } + +/****************************************************************************** + ******************************** AENQ Handlers ******************************* + *****************************************************************************/ +/** + * This handler will called for unknown event group or unimplemented handlers + **/ +static void unimplemented_aenq_handler(__rte_unused void *data, + __rte_unused struct ena_admin_aenq_entry *aenq_e) +{ + // Unimplemented handler +} + +static struct ena_aenq_handlers empty_aenq_handlers = { + .handlers = { + [ENA_ADMIN_LINK_CHANGE] = unimplemented_aenq_handler, + [ENA_ADMIN_NOTIFICATION] = unimplemented_aenq_handler, + [ENA_ADMIN_KEEP_ALIVE] = unimplemented_aenq_handler + }, + .unimplemented_handler = unimplemented_aenq_handler +};
From patchwork Thu Jun 7 09:42:58 2018 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40715
From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik Date: Thu, 7 Jun 2018 11:42:58 +0200 Message-Id: <20180607094322.14312-3-mk@semihalf.com> In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 03/27] net/ena: remove support of legacy LLQ
From: Rafal Kozik
The legacy LLQ API is deprecated and should no longer be supported by the drivers, so its support was removed from the driver.
Signed-off-by: Rafal Kozik Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 29 +++++------------------------ 1 file changed, 5 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 71d6838a0..f4b137a83 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1327,16 +1327,11 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr; adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; - /* Present ENA_MEM_BAR indicates available LLQ mode.
- * Use corresponding policy */ - if (adapter->dev_mem_base) - ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_DEV; - else if (adapter->regs) - ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; - else + if (!adapter->regs) { PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)", ENA_REGS_BAR); + return -ENXIO; + } ena_dev->reg_bar = adapter->regs; ena_dev->dmadev = adapter->pdev; @@ -1353,22 +1348,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) return -1; } - if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { - if (get_feat_ctx.max_queues.max_llq_num == 0) { - PMD_INIT_LOG(ERR, - "Trying to use LLQ but llq_num is 0.\n" - "Fall back into regular queues."); - ena_dev->tx_mem_queue_type = - ENA_ADMIN_PLACEMENT_POLICY_HOST; - adapter->num_queues = - get_feat_ctx.max_queues.max_sq_num; - } else { - adapter->num_queues = - get_feat_ctx.max_queues.max_llq_num; - } - } else { - adapter->num_queues = get_feat_ctx.max_queues.max_sq_num; - } + ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; + adapter->num_queues = get_feat_ctx.max_queues.max_sq_num; queue_size = ena_calc_queue_size(ena_dev, &get_feat_ctx); if ((queue_size <= 0) || (adapter->num_queues <= 0))
From patchwork Thu Jun 7 09:42:59 2018 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40717
From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com Date: Thu, 7 Jun 2018 11:42:59 +0200 Message-Id: <20180607094322.14312-4-mk@semihalf.com> In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 04/27] net/ena: add interrupt handler for admin queue
When polling mode is deactivated, the ENA device is able to send an MSI-X interrupt when it completes a command. Moreover, the same interrupt handler will be used for AENQ handling - the service events of the ENA device, which allow implementing a watchdog or an LSC handler.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f4b137a83..515d17fee 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -246,6 +246,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); static int ena_get_sset_count(struct rte_eth_dev *dev, int sset); +static void ena_interrupt_handler_rte(void *cb_arg); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -459,9 +460,16 @@ static void ena_close(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; adapter->state = ENA_ADAPTER_STATE_STOPPED; + rte_intr_disable(intr_handle); + rte_intr_callback_unregister(intr_handle, + ena_interrupt_handler_rte, + adapter); + ena_rx_queue_release_all(dev); ena_tx_queue_release_all(dev); } @@ -908,6 +916,8 @@ static int ena_start(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int rc = 0; if (!(adapter->state == ENA_ADAPTER_STATE_CONFIG || @@ -937,6 +947,11 @@ static int ena_start(struct rte_eth_dev *dev) ena_stats_restart(dev); + rte_intr_callback_register(intr_handle, + ena_interrupt_handler_rte, + adapter); + rte_intr_enable(intr_handle); + adapter->state = ENA_ADAPTER_STATE_RUNNING; return 0; @@ -1291,6 +1306,14 @@ static int ena_device_init(struct ena_com_dev *ena_dev, return rc; } +static void ena_interrupt_handler_rte(__rte_unused void *cb_arg) +{ + struct ena_adapter *adapter = (struct ena_adapter *)cb_arg; + struct ena_com_dev *ena_dev = 
&adapter->ena_dev; + + ena_com_admin_q_comp_intr_handler(ena_dev); +} + static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev;
From patchwork Thu Jun 7 09:43:00 2018 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40718
From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin , Anatoly Burakov Cc: dev@dpdk.org, matua@amazon.com Date: Thu, 7 Jun 2018 11:43:00 +0200 Message-Id: <20180607094322.14312-5-mk@semihalf.com> In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 05/27] net/ena: add stop and uninit routines
The lack of an uninit routine could lead to a memory leak. The stop routine was added to fulfill the set of allowed PMD operations. Checks for the PMD state in the start and configure routines were removed, as the upper layer already performs them. The interrupt setup was moved from the start to the init function.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 95 +++++++++++++++++++++++++------------------- drivers/net/ena/ena_ethdev.h | 3 +- 2 files changed, 56 insertions(+), 42 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 515d17fee..4ca19c57c 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -223,6 +223,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count); static void ena_init_rings(struct ena_adapter *adapter); static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); static int ena_start(struct rte_eth_dev *dev); +static void ena_stop(struct rte_eth_dev *dev); static void ena_close(struct rte_eth_dev *dev); static int ena_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); static void ena_rx_queue_release_all(struct rte_eth_dev *dev); @@ -254,6 +255,7 @@ static const struct eth_dev_ops ena_dev_ops = { .rx_queue_setup = ena_rx_queue_setup, .tx_queue_setup = ena_tx_queue_setup, .dev_start = ena_start, + .dev_stop = ena_stop, .link_update = ena_link_update, .stats_get = ena_stats_get, .mtu_set = ena_mtu_set, @@ -460,15 +462,9 @@ static void ena_close(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; - - adapter->state = ENA_ADAPTER_STATE_STOPPED; - rte_intr_disable(intr_handle); - rte_intr_callback_unregister(intr_handle, - ena_interrupt_handler_rte, - adapter); + ena_stop(dev); + adapter->state = ENA_ADAPTER_STATE_CLOSED; ena_rx_queue_release_all(dev); ena_tx_queue_release_all(dev); @@ -916,16 +912,8 @@ static int ena_start(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int rc = 0; 
- if (!(adapter->state == ENA_ADAPTER_STATE_CONFIG || - adapter->state == ENA_ADAPTER_STATE_STOPPED)) { - PMD_INIT_LOG(ERR, "API violation"); - return -1; - } - rc = ena_check_valid_conf(adapter); if (rc) return rc; @@ -947,16 +935,19 @@ static int ena_start(struct rte_eth_dev *dev) ena_stats_restart(dev); - rte_intr_callback_register(intr_handle, - ena_interrupt_handler_rte, - adapter); - rte_intr_enable(intr_handle); - adapter->state = ENA_ADAPTER_STATE_RUNNING; return 0; } +static void ena_stop(struct rte_eth_dev *dev) +{ + struct ena_adapter *adapter = + (struct ena_adapter *)(dev->data->dev_private); + + adapter->state = ENA_ADAPTER_STATE_STOPPED; +} + static int ena_queue_restart(struct ena_ring *ring) { int rc, bufs_num; @@ -1317,6 +1308,7 @@ static void ena_interrupt_handler_rte(__rte_unused void *cb_arg) static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev; + struct rte_intr_handle *intr_handle; struct ena_adapter *adapter = (struct ena_adapter *)(eth_dev->data->dev_private); struct ena_com_dev *ena_dev = &adapter->ena_dev; @@ -1347,6 +1339,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) pci_dev->addr.devid, pci_dev->addr.function); + intr_handle = &pci_dev->intr_handle; + adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr; adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; @@ -1406,36 +1400,55 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) return -ENOMEM; } + rte_intr_callback_register(intr_handle, + ena_interrupt_handler_rte, + adapter); + rte_intr_enable(intr_handle); + ena_com_set_admin_polling_mode(ena_dev, false); + adapters_found++; adapter->state = ENA_ADAPTER_STATE_INIT; return 0; } +static int eth_ena_dev_uninit(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct ena_adapter *adapter = + (struct ena_adapter *)(eth_dev->data->dev_private); + + if 
(rte_eal_process_type() != RTE_PROC_PRIMARY) + return -EPERM; + + if (adapter->state != ENA_ADAPTER_STATE_CLOSED) + ena_close(eth_dev); + + eth_dev->dev_ops = NULL; + eth_dev->rx_pkt_burst = NULL; + eth_dev->tx_pkt_burst = NULL; + eth_dev->tx_pkt_prepare = NULL; + + rte_free(adapter->drv_stats); + adapter->drv_stats = NULL; + + rte_intr_disable(intr_handle); + rte_intr_callback_unregister(intr_handle, + ena_interrupt_handler_rte, + adapter); + + adapter->state = ENA_ADAPTER_STATE_FREE; + + return 0; +} + static int ena_dev_configure(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); - if (!(adapter->state == ENA_ADAPTER_STATE_INIT || - adapter->state == ENA_ADAPTER_STATE_STOPPED)) { - PMD_INIT_LOG(ERR, "Illegal adapter state: %d", - adapter->state); - return -1; - } - - switch (adapter->state) { - case ENA_ADAPTER_STATE_INIT: - case ENA_ADAPTER_STATE_STOPPED: - adapter->state = ENA_ADAPTER_STATE_CONFIG; - break; - case ENA_ADAPTER_STATE_CONFIG: - RTE_LOG(WARNING, PMD, - "Ivalid driver state while trying to configure device\n"); - break; - default: - break; - } + adapter->state = ENA_ADAPTER_STATE_CONFIG; adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads; adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads; @@ -1838,7 +1851,7 @@ static int eth_ena_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, static int eth_ena_pci_remove(struct rte_pci_device *pci_dev) { - return rte_eth_dev_pci_generic_remove(pci_dev, NULL); + return rte_eth_dev_pci_generic_remove(pci_dev, eth_ena_dev_uninit); } static struct rte_pci_driver rte_ena_pmd = { diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 394d05e02..79fb14aeb 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -97,9 +97,10 @@ struct ena_ring { enum ena_adapter_state { ENA_ADAPTER_STATE_FREE = 0, ENA_ADAPTER_STATE_INIT = 1, - ENA_ADAPTER_STATE_RUNNING = 2, + 
ENA_ADAPTER_STATE_RUNNING = 2, ENA_ADAPTER_STATE_STOPPED = 3, ENA_ADAPTER_STATE_CONFIG = 4, + ENA_ADAPTER_STATE_CLOSED = 5, }; struct ena_driver_stats {
From patchwork Thu Jun 7 09:43:01 2018 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40719
From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com Date: Thu, 7 Jun 2018 11:43:01 +0200 Message-Id: <20180607094322.14312-6-mk@semihalf.com> In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 06/27] net/ena: add LSC intr support and AENQ handling
To make the LSC interrupt work, the AENQ must be configured properly in the ENA device. The AENQ interrupt is common for all maintenance interrupts - the proper handler is then executed depending on the received descriptor.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 52 ++++++++++++++++++++++++++++++++++++++------ drivers/net/ena/ena_ethdev.h | 2 ++ 2 files changed, 47 insertions(+), 7 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 4ca19c57c..a344b2573 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -201,7 +201,7 @@ static const struct rte_pci_id pci_id_ena_map[] = { { .device_id = 0 }, }; -static struct ena_aenq_handlers empty_aenq_handlers; +static struct ena_aenq_handlers aenq_handlers; static int ena_device_init(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx); @@ -732,8 +732,11 @@ static int ena_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { struct rte_eth_link *link = &dev->data->dev_link; + struct ena_adapter *adapter; + + adapter = (struct ena_adapter *)(dev->data->dev_private); - link->link_status = ETH_LINK_UP; + link->link_status = adapter->link_status ? 
ETH_LINK_UP : ETH_LINK_DOWN; link->link_speed = ETH_SPEED_NUM_10G; link->link_duplex = ETH_LINK_FULL_DUPLEX; @@ -1228,6 +1231,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) static int ena_device_init(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx) { + uint32_t aenq_groups; int rc; bool readless_supported; @@ -1263,7 +1267,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, ena_dev->dma_addr_bits = ena_com_get_dma_width(ena_dev); /* ENA device administration layer init */ - rc = ena_com_admin_init(ena_dev, &empty_aenq_handlers, true); + rc = ena_com_admin_init(ena_dev, &aenq_handlers, true); if (rc) { RTE_LOG(ERR, PMD, "cannot initialize ena admin queue with device\n"); @@ -1286,6 +1290,15 @@ static int ena_device_init(struct ena_com_dev *ena_dev, goto err_admin_init; } + aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE); + + aenq_groups &= get_feat_ctx->aenq.supported_groups; + rc = ena_com_set_aenq_config(ena_dev, aenq_groups); + if (rc) { + RTE_LOG(ERR, PMD, "Cannot configure aenq groups rc: %d\n", rc); + goto err_admin_init; + } + return 0; err_admin_init: @@ -1297,12 +1310,14 @@ static int ena_device_init(struct ena_com_dev *ena_dev, return rc; } -static void ena_interrupt_handler_rte(__rte_unused void *cb_arg) +static void ena_interrupt_handler_rte(void *cb_arg) { struct ena_adapter *adapter = (struct ena_adapter *)cb_arg; struct ena_com_dev *ena_dev = &adapter->ena_dev; ena_com_admin_q_comp_intr_handler(ena_dev); + if (adapter->state != ENA_ADAPTER_STATE_CLOSED) + ena_com_aenq_intr_handler(ena_dev, adapter); } static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) @@ -1405,6 +1420,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter); rte_intr_enable(intr_handle); ena_com_set_admin_polling_mode(ena_dev, false); + ena_com_admin_aenq_enable(ena_dev); adapters_found++; adapter->state = ENA_ADAPTER_STATE_INIT; @@ -1842,6 +1858,9 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, 
struct rte_mbuf **tx_pkts, return sent_idx; } +/********************************************************************* + * PMD configuration + *********************************************************************/ static int eth_ena_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { @@ -1856,7 +1875,7 @@ static int eth_ena_pci_remove(struct rte_pci_device *pci_dev) static struct rte_pci_driver rte_ena_pmd = { .id_table = pci_id_ena_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, .probe = eth_ena_pci_probe, .remove = eth_ena_pci_remove, }; @@ -1880,6 +1899,25 @@ ena_init_log(void) /****************************************************************************** ******************************** AENQ Handlers ******************************* *****************************************************************************/ +static void ena_update_on_link_change(void *adapter_data, + struct ena_admin_aenq_entry *aenq_e) +{ + struct rte_eth_dev *eth_dev; + struct ena_adapter *adapter; + struct ena_admin_aenq_link_change_desc *aenq_link_desc; + uint32_t status; + + adapter = (struct ena_adapter *)adapter_data; + aenq_link_desc = (struct ena_admin_aenq_link_change_desc *)aenq_e; + eth_dev = adapter->rte_dev; + + status = get_ena_admin_aenq_link_change_desc_link_status(aenq_link_desc); + adapter->link_status = status; + + ena_link_update(eth_dev, 0); + _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL); +} + /** * This handler will called for unknown event group or unimplemented handlers **/ @@ -1889,9 +1927,9 @@ static void unimplemented_aenq_handler(__rte_unused void *data, // Unimplemented handler } -static struct ena_aenq_handlers empty_aenq_handlers = { +static struct ena_aenq_handlers aenq_handlers = { .handlers = { - [ENA_ADMIN_LINK_CHANGE] = unimplemented_aenq_handler, + [ENA_ADMIN_LINK_CHANGE] = ena_update_on_link_change, [ENA_ADMIN_NOTIFICATION] = 
unimplemented_aenq_handler, [ENA_ADMIN_KEEP_ALIVE] = unimplemented_aenq_handler }, diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 79fb14aeb..16172a54a 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -181,6 +181,8 @@ struct ena_adapter { uint64_t tx_selected_offloads; uint64_t rx_supported_offloads; uint64_t rx_selected_offloads; + + bool link_status; }; #endif /* _ENA_ETHDEV_H_ */

From patchwork Thu Jun 7 09:43:02 2018
X-Patchwork-Id: 40720
From: Michal Krawczyk
Date: Thu, 7 Jun 2018 11:43:02 +0200
Message-Id: <20180607094322.14312-7-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 07/27] net/ena: handle ENA notification

From: Rafal Kozik

When an ENA notification is provided, the ena_notification handler is called. It checks whether the received value is corrupted and reports proper warnings if necessary. Data received from the NIC is parsed in ena_update_hints. Fields for storing this information were added to the ena_adapter structure.
ENA notifications are enabled by setting the ENA_ADMIN_NOTIFICATION flag in aenq_groups. Signed-off-by: Rafal Kozik Acked-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 39 insertions(+), 2 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index a344b2573..1b8fc0f42 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1290,7 +1290,8 @@ static int ena_device_init(struct ena_com_dev *ena_dev, goto err_admin_init; } - aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) + aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) | + BIT(ENA_ADMIN_NOTIFICATION); aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); @@ -1724,6 +1725,19 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, return i; } +static void ena_update_hints(struct ena_adapter *adapter, + struct ena_admin_ena_hw_hints *hints) +{ + if (hints->admin_completion_tx_timeout) + adapter->ena_dev.admin_queue.completion_timeout = + hints->admin_completion_tx_timeout * 1000; + + if (hints->mmio_read_timeout) + /* convert to usec */ + adapter->ena_dev.mmio_read.reg_read_to = + hints->mmio_read_timeout * 1000; +} + static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -1918,6 +1932,29 @@ static void ena_update_on_link_change(void *adapter_data, _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL); } +static void ena_notification(void *data, + struct ena_admin_aenq_entry *aenq_e) +{ + struct ena_adapter *adapter = (struct ena_adapter *)data; + struct ena_admin_ena_hw_hints *hints; + + if (aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION) + RTE_LOG(WARNING, PMD, "Invalid group(%x) expected %x\n", + aenq_e->aenq_common_desc.group, + ENA_ADMIN_NOTIFICATION); + + switch (aenq_e->aenq_common_desc.syndrom) { + case ENA_ADMIN_UPDATE_HINTS: + hints = (struct ena_admin_ena_hw_hints *) + 
(&aenq_e->inline_data_w4); + ena_update_hints(adapter, hints); + break; + default: + RTE_LOG(ERR, PMD, "Invalid aenq notification link state %d\n", + aenq_e->aenq_common_desc.syndrom); + } +} + /** * This handler will called for unknown event group or unimplemented handlers **/ @@ -1930,7 +1967,7 @@ static void unimplemented_aenq_handler(__rte_unused void *data, static struct ena_aenq_handlers aenq_handlers = { .handlers = { [ENA_ADMIN_LINK_CHANGE] = ena_update_on_link_change, - [ENA_ADMIN_NOTIFICATION] = unimplemented_aenq_handler, + [ENA_ADMIN_NOTIFICATION] = ena_notification, [ENA_ADMIN_KEEP_ALIVE] = unimplemented_aenq_handler }, .unimplemented_handler = unimplemented_aenq_handler

From patchwork Thu Jun 7 09:43:03 2018
X-Patchwork-Id: 40721
From: Michal Krawczyk
Date: Thu, 7 Jun 2018 11:43:03 +0200
Message-Id: <20180607094322.14312-8-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 08/27] net/ena: restart only initialized queues instead of all

There is no need to check all queues for restart. It is sufficient to check only the previously initialized queues.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 1b8fc0f42..58cf8a971 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -749,13 +749,18 @@ static int ena_queue_restart_all(struct rte_eth_dev *dev, struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); struct ena_ring *queues = NULL; + int nb_queues; int i = 0; int rc = 0; - queues = (ring_type == ENA_RING_TYPE_RX) ? - adapter->rx_ring : adapter->tx_ring; - - for (i = 0; i < adapter->num_queues; i++) { + if (ring_type == ENA_RING_TYPE_RX) { + queues = adapter->rx_ring; + nb_queues = dev->data->nb_rx_queues; + } else { + queues = adapter->tx_ring; + nb_queues = dev->data->nb_tx_queues; + } + for (i = 0; i < nb_queues; i++) { if (queues[i].configured) { if (ring_type == ENA_RING_TYPE_RX) { ena_assert_msg(

From patchwork Thu Jun 7 09:43:04 2018
X-Patchwork-Id: 40722
From: Michal Krawczyk
Date: Thu, 7 Jun 2018 11:43:04 +0200
Message-Id: <20180607094322.14312-9-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 09/27] net/ena: add reset routine

The reset routine can be used by the DPDK application to reset the device after receiving RTE_ETH_EVENT_INTR_RESET from the PMD. The reset event is not triggered by the driver yet; it will be added in upcoming commits to enable error recovery in case the device malfunctions.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 64 +++++++++++++++++++++++++++++++++++++++++++- drivers/net/ena/ena_ethdev.h | 2 ++ 2 files changed, 65 insertions(+), 1 deletion(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 58cf8a971..4fae4fd66 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -225,6 +225,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); static int ena_start(struct rte_eth_dev *dev); static void ena_stop(struct rte_eth_dev *dev); static void ena_close(struct rte_eth_dev *dev); +static int ena_dev_reset(struct rte_eth_dev *dev); static int ena_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); static void ena_rx_queue_release_all(struct rte_eth_dev *dev); static void ena_tx_queue_release_all(struct rte_eth_dev *dev); @@ -262,6 +263,7 @@ static const struct eth_dev_ops ena_dev_ops = { .rx_queue_release = ena_rx_queue_release, .tx_queue_release = ena_tx_queue_release, .dev_close = ena_close, + .dev_reset = ena_dev_reset, .reta_update = ena_rss_reta_update, .reta_query = ena_rss_reta_query, }; @@ -470,6 +472,63 @@ static void ena_close(struct rte_eth_dev *dev) ena_tx_queue_release_all(dev); } +static int +ena_dev_reset(struct rte_eth_dev *dev) +{ + struct rte_mempool *mb_pool_rx[ENA_MAX_NUM_QUEUES]; + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev; + struct rte_intr_handle *intr_handle; + struct ena_com_dev *ena_dev; + struct ena_com_dev_get_features_ctx get_feat_ctx; + struct ena_adapter *adapter; + int nb_queues; + int rc, i; + + adapter = (struct ena_adapter *)(dev->data->dev_private); + ena_dev = &adapter->ena_dev; + eth_dev = adapter->rte_dev; + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + intr_handle = &pci_dev->intr_handle; + nb_queues = eth_dev->data->nb_rx_queues; + + ena_com_set_admin_running_state(ena_dev, false); + + ena_com_dev_reset(ena_dev, adapter->reset_reason); + + for (i = 0; i < nb_queues; i++) + 
mb_pool_rx[i] = adapter->rx_ring[i].mb_pool; + + ena_rx_queue_release_all(eth_dev); + ena_tx_queue_release_all(eth_dev); + + rte_intr_disable(intr_handle); + + ena_com_abort_admin_commands(ena_dev); + ena_com_wait_for_abort_completion(ena_dev); + ena_com_admin_destroy(ena_dev); + ena_com_mmio_reg_read_request_destroy(ena_dev); + + rc = ena_device_init(ena_dev, &get_feat_ctx); + if (rc) { + PMD_INIT_LOG(CRIT, "Cannot initialize device\n"); + return rc; + } + + rte_intr_enable(intr_handle); + ena_com_set_admin_polling_mode(ena_dev, false); + ena_com_admin_aenq_enable(ena_dev); + + for (i = 0; i < nb_queues; ++i) + ena_rx_queue_setup(eth_dev, i, adapter->rx_ring_size, 0, NULL, + mb_pool_rx[i]); + + for (i = 0; i < nb_queues; ++i) + ena_tx_queue_setup(eth_dev, i, adapter->tx_ring_size, 0, NULL); + + return 0; +} + static int ena_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) @@ -1074,7 +1133,10 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, for (i = 0; i < txq->ring_size; i++) txq->empty_tx_reqs[i] = i; - txq->offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + if (tx_conf != NULL) { + txq->offloads = + tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + } /* Store pointer to this queue in upper layer */ txq->configured = 1; diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 16172a54a..79e9e655d 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -183,6 +183,8 @@ struct ena_adapter { uint64_t rx_selected_offloads; bool link_status; + + enum ena_regs_reset_reason_types reset_reason; }; #endif /* _ENA_ETHDEV_H_ */

From patchwork Thu Jun 7 09:43:05 2018
X-Patchwork-Id: 40723
From: Michal Krawczyk
Date: Thu, 7 Jun 2018 11:43:05 +0200
Message-Id: <20180607094322.14312-10-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 10/27] net/ena: add lrte_timer dependency for linking

The ENA PMD is required to use librte_timer. The appropriate dependency must be added in case DPDK is built as a shared library.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/Makefile | 1 + mk/rte.app.mk | 1 + 2 files changed, 2 insertions(+) diff --git a/drivers/net/ena/Makefile b/drivers/net/ena/Makefile index 43339f3b9..ff9ce315b 100644 --- a/drivers/net/ena/Makefile +++ b/drivers/net/ena/Makefile @@ -58,5 +58,6 @@ CFLAGS += $(INCLUDES) LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs LDLIBS += -lrte_bus_pci +LDLIBS += -lrte_timer include $(RTE_SDK)/mk/rte.lib.mk diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 1e32c83e7..c70bc254e 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -60,6 +60,7 @@ endif _LDLIBS-y += --whole-archive +_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMBER) += -lrte_member

From patchwork Thu Jun 7 09:43:06 2018
X-Patchwork-Id: 40724
From: Michal Krawczyk
Date: Thu, 7 Jun 2018 11:43:06 +0200
Message-Id: <20180607094322.14312-11-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 11/27] net/ena: add watchdog and keep alive AENQ handler

The keep-alive AENQ interrupt is issued by the device periodically. It allows the driver to check the health of the device and to trigger a reset event if the device stops responding. To check the state of the device, the DPDK application must call rte_timer_manage() periodically.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 58 ++++++++++++++++++++++++++++++++++++++++++-- drivers/net/ena/ena_ethdev.h | 9 +++++++ 2 files changed, 65 insertions(+), 2 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 4fae4fd66..796b6fc0a 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -249,6 +249,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev, uint16_t reta_size); static int ena_get_sset_count(struct rte_eth_dev *dev, int sset); static void ena_interrupt_handler_rte(void *cb_arg); +static void ena_timer_wd_callback(struct rte_timer *timer, void *arg); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -979,6 +980,7 @@ static int ena_start(struct rte_eth_dev *dev) { struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); + uint64_t ticks; int rc = 0; rc = ena_check_valid_conf(adapter); @@ -1002,6 +1004,13 @@ static int ena_start(struct rte_eth_dev *dev) ena_stats_restart(dev); + adapter->timestamp_wd = rte_get_timer_cycles(); + adapter->keep_alive_timeout = ENA_DEVICE_KALIVE_TIMEOUT; + + ticks = rte_get_timer_hz(); + rte_timer_reset(&adapter->timer_wd, ticks, PERIODICAL, rte_lcore_id(), + ena_timer_wd_callback, adapter); + adapter->state = ENA_ADAPTER_STATE_RUNNING; return 0; @@ -1012,6 +1021,8 @@ static void ena_stop(struct rte_eth_dev *dev) struct ena_adapter *adapter = (struct ena_adapter *)(dev->data->dev_private); + rte_timer_stop_sync(&adapter->timer_wd); + adapter->state = ENA_ADAPTER_STATE_STOPPED; } @@ -1358,7 +1369,8 @@ static int ena_device_init(struct ena_com_dev *ena_dev, } aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) | - BIT(ENA_ADMIN_NOTIFICATION); + BIT(ENA_ADMIN_NOTIFICATION) | + BIT(ENA_ADMIN_KEEP_ALIVE); aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); @@ -1388,6 +1400,26 @@ static void ena_interrupt_handler_rte(void *cb_arg) 
ena_com_aenq_intr_handler(ena_dev, adapter); } +static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer, + void *arg) +{ + struct ena_adapter *adapter = (struct ena_adapter *)arg; + struct rte_eth_dev *dev = adapter->rte_dev; + + if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT) + return; + + /* Within reasonable timing range no memory barriers are needed */ + if ((rte_get_timer_cycles() - adapter->timestamp_wd) >= + adapter->keep_alive_timeout) { + RTE_LOG(ERR, PMD, "The ENA device is not responding - " + "performing device reset..."); + adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO; + _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, + NULL); + } +} + static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev; @@ -1490,6 +1522,10 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) ena_com_set_admin_polling_mode(ena_dev, false); ena_com_admin_aenq_enable(ena_dev); + if (adapters_found == 0) + rte_timer_subsystem_init(); + rte_timer_init(&adapter->timer_wd); + adapters_found++; adapter->state = ENA_ADAPTER_STATE_INIT; @@ -1803,6 +1839,16 @@ static void ena_update_hints(struct ena_adapter *adapter, /* convert to usec */ adapter->ena_dev.mmio_read.reg_read_to = hints->mmio_read_timeout * 1000; + + if (hints->driver_watchdog_timeout) { + if (hints->driver_watchdog_timeout == ENA_HW_HINTS_NO_TIMEOUT) + adapter->keep_alive_timeout = ENA_HW_HINTS_NO_TIMEOUT; + else + // Convert msecs to ticks + adapter->keep_alive_timeout = + (hints->driver_watchdog_timeout * + rte_get_timer_hz()) / 1000; + } } static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, @@ -2022,6 +2068,14 @@ static void ena_notification(void *data, } } +static void ena_keep_alive(void *adapter_data, + __rte_unused struct ena_admin_aenq_entry *aenq_e) +{ + struct ena_adapter *adapter = (struct ena_adapter *)adapter_data; + + adapter->timestamp_wd = rte_get_timer_cycles(); +} + /** * This handler will called 
for unknown event group or unimplemented handlers **/ @@ -2035,7 +2089,7 @@ static struct ena_aenq_handlers aenq_handlers = { .handlers = { [ENA_ADMIN_LINK_CHANGE] = ena_update_on_link_change, [ENA_ADMIN_NOTIFICATION] = ena_notification, - [ENA_ADMIN_KEEP_ALIVE] = unimplemented_aenq_handler + [ENA_ADMIN_KEEP_ALIVE] = ena_keep_alive }, .unimplemented_handler = unimplemented_aenq_handler }; diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 79e9e655d..b44cca23e 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -34,8 +34,10 @@ #ifndef _ENA_ETHDEV_H_ #define _ENA_ETHDEV_H_ +#include #include #include +#include #include "ena_com.h" @@ -50,6 +52,9 @@ #define ENA_MMIO_DISABLE_REG_READ BIT(0) +#define ENA_WD_TIMEOUT_SEC 3 +#define ENA_DEVICE_KALIVE_TIMEOUT (ENA_WD_TIMEOUT_SEC * rte_get_timer_hz()) + struct ena_adapter; enum ena_ring_type { @@ -185,6 +190,10 @@ struct ena_adapter { bool link_status; enum ena_regs_reset_reason_types reset_reason; + + struct rte_timer timer_wd; + uint64_t timestamp_wd; + uint64_t keep_alive_timeout; }; #endif /* _ENA_ETHDEV_H_ */ From patchwork Thu Jun 7 09:43:07 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40725 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5C1B81B219; Thu, 7 Jun 2018 11:43:51 +0200 (CEST) Received: from mail-lf0-f66.google.com (mail-lf0-f66.google.com [209.85.215.66]) by dpdk.org (Postfix) with ESMTP id C134A1B1DB for ; Thu, 7 Jun 2018 11:43:46 +0200 (CEST) Received: by mail-lf0-f66.google.com with SMTP id v135-v6so13648944lfa.9 for ; Thu, 07 Jun 2018 02:43:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; 
h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=3Vitdk7ctM7aZRDSswk6hosriHw7EHcK/VUdY6MZGtM=; b=LPsWSDR3uZ6BVUSQBeXE+E+/KyCU5tiCT7GdcAVx/6LVgKDlCfhWaf/sU9iEXcjX04 3po4q4Be+T6GwUOZtUz//UBJ3MCTxnySKKquYLkCkw/GMlJI7bHv5/yi0g4sosWaCjnH WRIl+wKTzPO6En8f89pbwPTDurjQrmKl47ncFsdghkFj/2D2WCO382Zs1ZC+/D3eo2dk FKR4dkBMdi/DjTSxz1tq1d1atc9xNfmrQkPrqZvLDBrV9mkdM2raOF2aZqEyjqYJGJ8a /WsJ0fMJjCYDLyj+QhgBLedUUNH1lpYDycXFB41VOJuePYRy93Zfx8A4P41XqIx6Nov8 Teew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=3Vitdk7ctM7aZRDSswk6hosriHw7EHcK/VUdY6MZGtM=; b=AwxdQQnvNtnEw34MXyUReuFPSLposhZ6JAaLnVvJg6XN3j+83BT4PXUElqS/oPG9kz 8iKlWCCAObTuqFQdzxh/6Si0633Rh2kPD5rKBfdUy/fVmfD4hsur65iu6CyfxlqXlCNq Qhp1lN37q8k+pf1o2Tn5JuPsL2UNRg0iZC5oJJtQLr2LRrQdK1LgAtuJOPPNFS2xZmi3 VRqF9fY/e1TVk/qC3DGdmqS01dPNXjHWdNWNzyda+4UloXE2iFPNFZHGeyGygLWOZ6sz hf6UzUVg06v1wfzB6JE96yBBi2ms8MtbluPJCMwEY+peteDFCuQaw6v4KEr1deGlrtsw 5/CA== X-Gm-Message-State: APt69E3W+uRuBRysvOrX7TvkeCd6blut7FZDjTEYAk4rbjGvejjDpsFD CBM7ypeotK1I9jgg/cV12vn7TQ== X-Google-Smtp-Source: ADUXVKLjmky1PVbKxSTNgHjEi4r42i4Gp20yW9uH8dGf5Q2EYiL8gL69LZmHPQOFyxy2ihF0SXOsCg== X-Received: by 2002:a2e:401b:: with SMTP id n27-v6mr1008282lja.6.1528364626433; Thu, 07 Jun 2018 02:43:46 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. 
[31.172.191.173]) by smtp.gmail.com with ESMTPSA id p28-v6sm3612368lfh.24.2018.06.07.02.43.43 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 07 Jun 2018 02:43:44 -0700 (PDT) From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com Date: Thu, 7 Jun 2018 11:43:07 +0200 Message-Id: <20180607094322.14312-12-mk@semihalf.com> X-Mailer: git-send-email 2.14.1 In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 12/27] net/ena: add checking for admin queue state X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The admin queue can stop responding or become inactive due to unexpected behaviour of the device. In that case, the whole device should be restarted.
Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 37 +++++++++++++++++++++++++++++-------- drivers/net/ena/ena_ethdev.h | 2 ++ 2 files changed, 31 insertions(+), 8 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 796b6fc0a..dc253c384 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -527,6 +527,8 @@ ena_dev_reset(struct rte_eth_dev *dev) for (i = 0; i < nb_queues; ++i) ena_tx_queue_setup(eth_dev, i, adapter->tx_ring_size, 0, NULL); + adapter->trigger_reset = false; + return 0; } @@ -1400,21 +1402,40 @@ static void ena_interrupt_handler_rte(void *cb_arg) ena_com_aenq_intr_handler(ena_dev, adapter); } +static void check_for_missing_keep_alive(struct ena_adapter *adapter) +{ + if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT) + return; + + if (unlikely((rte_get_timer_cycles() - adapter->timestamp_wd) >= + adapter->keep_alive_timeout)) { + RTE_LOG(ERR, PMD, "Keep alive timeout\n"); + adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO; + adapter->trigger_reset = true; + } +} + +/* Check if admin queue is enabled */ +static void check_for_admin_com_state(struct ena_adapter *adapter) +{ + if (unlikely(!ena_com_get_admin_running_state(&adapter->ena_dev))) { + RTE_LOG(ERR, PMD, "ENA admin queue is not in running state!\n"); + adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; + adapter->trigger_reset = true; + } +} + static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer, void *arg) { struct ena_adapter *adapter = (struct ena_adapter *)arg; struct rte_eth_dev *dev = adapter->rte_dev; - if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT) - return; + check_for_missing_keep_alive(adapter); + check_for_admin_com_state(adapter); - /* Within reasonable timing range no memory barriers are needed */ - if ((rte_get_timer_cycles() - adapter->timestamp_wd) >= - adapter->keep_alive_timeout) { - RTE_LOG(ERR, PMD, "The ENA device is not responding - " - "performing 
device reset..."); - adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO; + if (unlikely(adapter->trigger_reset)) { + RTE_LOG(ERR, PMD, "Trigger reset is on\n"); _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL); } diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index b44cca23e..1f6a7062f 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -194,6 +194,8 @@ struct ena_adapter { struct rte_timer timer_wd; uint64_t timestamp_wd; uint64_t keep_alive_timeout; + + bool trigger_reset; }; #endif /* _ENA_ETHDEV_H_ */ From patchwork Thu Jun 7 09:43:08 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40726 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5E4241B3C6; Thu, 7 Jun 2018 11:43:53 +0200 (CEST) Received: from mail-lf0-f66.google.com (mail-lf0-f66.google.com [209.85.215.66]) by dpdk.org (Postfix) with ESMTP id 769171B20E for ; Thu, 7 Jun 2018 11:43:49 +0200 (CEST) Received: by mail-lf0-f66.google.com with SMTP id d24-v6so13633669lfa.8 for ; Thu, 07 Jun 2018 02:43:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=wk7JyfEThpLElCOf4/NtjEdXAxO31339W9r8YjTmMtw=; b=EwnAyNNiPsTlF/vJS3fhbly6nZ6Vv6aWyyKfvHwx8ERsSIVV0CD5+GsbASpEllWoYe pnKA51LnlaOlEySXd3lbLdSOT6i0FNW1nCDDg+HVBrpvNS1YXbspfCVHihrHeTks32Fq 8py2oNT1KkX1rUOO4V/V906VBTm0VjQmQdqvRlrb/x0kUavPAEW1dgIo/hxFoBU87rEU CJkHqfYUjmL2HvClmfwCtmpkDBWsFfla8fTUyHorn4H1RUL+8LCiIz0BR+lbENYAZfIn fahZHMYRWPlYVNaSqGuMymKE13Lx4hrFwvA0bpUYSGGlE2F23IAlFNbN/HwI9Z1oN28S ogeA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=wk7JyfEThpLElCOf4/NtjEdXAxO31339W9r8YjTmMtw=; b=OmaugPEyhIsFHVrD8JRoNQhcCA4fGcZLv0vXKgu/itZveYFYLUPTI3AnnfZA6JQr3Q DYd1o4bGrkpXphDemAo5cr3EkYrpPnTofwsp7yfEoAXlyqGMXAPuQpvksAMGdK2BCgt1 xPENWCUstqmeG7RnPDuPkseiuoZBewZQuQqF6UgYIOiRcUsNTus3hI/HDtnpWKsCoMyu zmdCp1SRcx07qE5EUMrES7Rfu8Xhw8aAUMgT5B9OQsjGBTwJm+MFr8CwOSv4+QTCpeiJ W/RzK2NpUVoGXyAjcJnQCFH+MLvLjrRisZo73bNzCQavUOxr6VeQkaYhXV98utHsLhBy qy/g== X-Gm-Message-State: APt69E0RF5ksUkDyJUj3xsSS8rkPt58xMYOWF8W4IORLPegdGt0Nt63U gAl0gTfH0HHhXhUMIOXIOG5qpg== X-Google-Smtp-Source: ADUXVKI2VnoXY4MzTEqX5fsSJytnMKDBTSRnmavml9KlHt7OXODghRGVeBECAIhZVEZif1LCYpeINQ== X-Received: by 2002:a2e:5047:: with SMTP id v7-v6mr1033188ljd.122.1528364629144; Thu, 07 Jun 2018 02:43:49 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. [31.172.191.173]) by smtp.gmail.com with ESMTPSA id p28-v6sm3612368lfh.24.2018.06.07.02.43.46 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 07 Jun 2018 02:43:47 -0700 (PDT) From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik Date: Thu, 7 Jun 2018 11:43:08 +0200 Message-Id: <20180607094322.14312-13-mk@semihalf.com> X-Mailer: git-send-email 2.14.1 In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 13/27] net/ena: make watchdog configurable X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Rafal Kozik Add a wd_state variable to make the driver functional without the keep-alive AENQ handler. The watchdog will be executed only if the keep-alive AENQ group is enabled.
Signed-off-by: Rafal Kozik Acked-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 19 +++++++++++++++---- drivers/net/ena/ena_ethdev.h | 2 ++ 2 files changed, 17 insertions(+), 4 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index dc253c384..1a15c184a 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -204,7 +204,8 @@ static const struct rte_pci_id pci_id_ena_map[] = { static struct ena_aenq_handlers aenq_handlers; static int ena_device_init(struct ena_com_dev *ena_dev, - struct ena_com_dev_get_features_ctx *get_feat_ctx); + struct ena_com_dev_get_features_ctx *get_feat_ctx, + bool *wd_state); static int ena_dev_configure(struct rte_eth_dev *dev); static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); @@ -485,6 +486,7 @@ ena_dev_reset(struct rte_eth_dev *dev) struct ena_adapter *adapter; int nb_queues; int rc, i; + bool wd_state; adapter = (struct ena_adapter *)(dev->data->dev_private); ena_dev = &adapter->ena_dev; @@ -510,11 +512,12 @@ ena_dev_reset(struct rte_eth_dev *dev) ena_com_admin_destroy(ena_dev); ena_com_mmio_reg_read_request_destroy(ena_dev); - rc = ena_device_init(ena_dev, &get_feat_ctx); + rc = ena_device_init(ena_dev, &get_feat_ctx, &wd_state); if (rc) { PMD_INIT_LOG(CRIT, "Cannot initialize device\n"); return rc; } + adapter->wd_state = wd_state; rte_intr_enable(intr_handle); ena_com_set_admin_polling_mode(ena_dev, false); @@ -1309,7 +1312,8 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } static int ena_device_init(struct ena_com_dev *ena_dev, - struct ena_com_dev_get_features_ctx *get_feat_ctx) + struct ena_com_dev_get_features_ctx *get_feat_ctx, + bool *wd_state) { uint32_t aenq_groups; int rc; @@ -1381,6 +1385,8 @@ static int ena_device_init(struct ena_com_dev *ena_dev, goto err_admin_init; } + *wd_state = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE)); + return 0; err_admin_init: @@ -1404,6 +1410,9 @@ 
static void ena_interrupt_handler_rte(void *cb_arg) static void check_for_missing_keep_alive(struct ena_adapter *adapter) { + if (!adapter->wd_state) + return; + if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT) return; @@ -1452,6 +1461,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) int queue_size, rc; static int adapters_found; + bool wd_state; memset(adapter, 0, sizeof(struct ena_adapter)); ena_dev = &adapter->ena_dev; @@ -1495,11 +1505,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->id_number); /* device specific initialization routine */ - rc = ena_device_init(ena_dev, &get_feat_ctx); + rc = ena_device_init(ena_dev, &get_feat_ctx, &wd_state); if (rc) { PMD_INIT_LOG(CRIT, "Failed to init ENA device"); return -1; } + adapter->wd_state = wd_state; ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; adapter->num_queues = get_feat_ctx.max_queues.max_sq_num; diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 1f6a7062f..594e643e2 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -196,6 +196,8 @@ struct ena_adapter { uint64_t keep_alive_timeout; bool trigger_reset; + + bool wd_state; }; #endif /* _ENA_ETHDEV_H_ */ From patchwork Thu Jun 7 09:43:09 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40727 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F31E51B3D1; Thu, 7 Jun 2018 11:43:54 +0200 (CEST) Received: from mail-lf0-f66.google.com (mail-lf0-f66.google.com [209.85.215.66]) by dpdk.org (Postfix) with ESMTP id 33FB81B1B5 for ; Thu, 7 Jun 2018 11:43:51 +0200 (CEST) Received: by mail-lf0-f66.google.com with SMTP id g21-v6so11669296lfb.4 for ; Thu, 07 Jun 2018 02:43:51 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=h4FDbEJZN8D4yMCkpiflSwbVdpTeH4Ekl9Sxdt3vgww=; b=hv8iOKzuCaLVf6h8Y4oZl4koeURpdQZY4Td0RqlYtsGd2IgN5WSGMjGEqCB0n4KMIK kFUFqYv+A2u+ML+oE7QYMhWURAPRFQicoOlLoBJJQDh+8Dh4K4EdDww3gCoVC47TSfbg O6CVVDQWwI/cBpW7gFDjdz15uWg5ZkPX16R1kHQS1E0mFxboLqK5Rb+VvDwzr0JkYFlD MJGUaa6pAW82e/L++IXO5a1J2QHgssrZtZWhvVjmCnBE9wvrjWv8pyjLpD2kwz7er87y 9mrYUzyeeKA3qj+mu+HhgQorBGe3jKJzR9WSsq6rSZ6MPmA8JItALs2oXO2Sz225ps+k Bh/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=h4FDbEJZN8D4yMCkpiflSwbVdpTeH4Ekl9Sxdt3vgww=; b=i0wd3lG8juNSfZg9I6d0/gdNHZFGOhHYFiEHgcm5gf/ZXn3fLWB13xFL7O4QZuZ+Gr eLb56vATiCGTNPX3g1YxcALg7wOW6R4MmnEITTbTleUbwe5bg5xkRI0VWYHPX3AVCXvI Uy6JICsoV1z+cxVJ6gaOa3gtMUvqopj8TGXPdE9i9bAFHfwk3/zfelTlTihQu+j8GXtt JVg9+KlZA8fTdqB1jC12xAmCRNaQnpiOciiAXHx7Rrt2VIM7FY2PgdENP16qL/Zqgkrm NV4CzzmeYVWnrIVI6/JQ9bXpsU4/c9Mnn9Nj7ERKsSAVDxz4gr4eki5QopaVIEp/pPfh 9ZRA== X-Gm-Message-State: APt69E0La1Z3RTpprnOTI2aTwEWgMoOLI53uw+qeozpUd/qmweMq8pQS KwFXdWE11VSXk3dhFaDTLsjj/g== X-Google-Smtp-Source: ADUXVKI9MLo3YLDMlnJVPmissf7EWQNo0RFmxiPtXNvEgrFYmDDV8Jzchj94juMdJdF6cTeTvCYEkA== X-Received: by 2002:a2e:9a44:: with SMTP id k4-v6mr961970ljj.17.1528364630867; Thu, 07 Jun 2018 02:43:50 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. 
[31.172.191.173]) by smtp.gmail.com with ESMTPSA id p28-v6sm3612368lfh.24.2018.06.07.02.43.49 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 07 Jun 2018 02:43:49 -0700 (PDT) From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com Date: Thu, 7 Jun 2018 11:43:09 +0200 Message-Id: <20180607094322.14312-14-mk@semihalf.com> X-Mailer: git-send-email 2.14.1 In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 14/27] net/ena: add RX out of order completion X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This feature allows RX packets to be cleaned up out of order. Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 48 ++++++++++++++++++++++++++++++++++++++++---- drivers/net/ena/ena_ethdev.h | 8 ++++++-- 2 files changed, 50 insertions(+), 6 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 1a15c184a..f0e95ef58 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -368,6 +368,19 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf, } } +static inline int validate_rx_req_id(struct ena_ring *rx_ring, uint16_t req_id) +{ + if (likely(req_id < rx_ring->ring_size)) + return 0; + + RTE_LOG(ERR, PMD, "Invalid rx req_id: %hu\n", req_id); + + rx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; + rx_ring->adapter->trigger_reset = true; + + return -EFAULT; +} + static void ena_config_host_info(struct ena_com_dev *ena_dev) { struct ena_admin_host_info *host_info; @@ -724,6 +737,10 @@ static void ena_rx_queue_release(void *queue) rte_free(ring->rx_buffer_info); ring->rx_buffer_info = NULL; + if (ring->empty_rx_reqs) + 
rte_free(ring->empty_rx_reqs); + ring->empty_rx_reqs = NULL; + ring->configured = 0; RTE_LOG(NOTICE, PMD, "RX Queue %d:%d released\n", @@ -1176,7 +1193,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, (struct ena_adapter *)(dev->data->dev_private); struct ena_ring *rxq = NULL; uint16_t ena_qid = 0; - int rc = 0; + int i, rc = 0; struct ena_com_dev *ena_dev = &adapter->ena_dev; rxq = &adapter->rx_ring[queue_idx]; @@ -1242,6 +1259,19 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } + rxq->empty_rx_reqs = rte_zmalloc("rxq->empty_rx_reqs", + sizeof(uint16_t) * nb_desc, + RTE_CACHE_LINE_SIZE); + if (!rxq->empty_rx_reqs) { + RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n"); + rte_free(rxq->rx_buffer_info); + rxq->rx_buffer_info = NULL; + return -ENOMEM; + } + + for (i = 0; i < nb_desc; i++) + rxq->empty_tx_reqs[i] = i; + /* Store pointer to this queue in upper layer */ rxq->configured = 1; dev->data->rx_queues[queue_idx] = rxq; @@ -1256,7 +1286,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) uint16_t ring_size = rxq->ring_size; uint16_t ring_mask = ring_size - 1; uint16_t next_to_use = rxq->next_to_use; - uint16_t in_use; + uint16_t in_use, req_id; struct rte_mbuf **mbufs = &rxq->rx_buffer_info[0]; if (unlikely(!count)) @@ -1284,12 +1314,14 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) struct ena_com_buf ebuf; rte_prefetch0(mbufs[((next_to_use + 4) & ring_mask)]); + + req_id = rxq->empty_rx_reqs[next_to_use_masked]; /* prepare physical address for DMA transaction */ ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM; ebuf.len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM; /* pass resource to device */ rc = ena_com_add_single_rx_desc(rxq->ena_com_io_sq, - &ebuf, next_to_use_masked); + &ebuf, req_id); if (unlikely(rc)) { rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbuf), count - i); @@ -1710,6 +1742,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct 
rte_mbuf **rx_pkts, unsigned int ring_mask = ring_size - 1; uint16_t next_to_clean = rx_ring->next_to_clean; uint16_t desc_in_use = 0; + uint16_t req_id; unsigned int recv_idx = 0; struct rte_mbuf *mbuf = NULL; struct rte_mbuf *mbuf_head = NULL; @@ -1750,7 +1783,12 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, break; while (segments < ena_rx_ctx.descs) { - mbuf = rx_buff_info[next_to_clean & ring_mask]; + req_id = ena_rx_ctx.ena_bufs[segments].req_id; + rc = validate_rx_req_id(rx_ring, req_id); + if (unlikely(rc)) + break; + + mbuf = rx_buff_info[req_id]; mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len; mbuf->data_off = RTE_PKTMBUF_HEADROOM; mbuf->refcnt = 1; @@ -1767,6 +1805,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, mbuf_head->pkt_len += mbuf->data_len; mbuf_prev = mbuf; + rx_ring->empty_rx_reqs[next_to_clean & ring_mask] = + req_id; segments++; next_to_clean++; } diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 594e643e2..bba5ad53a 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -75,8 +75,12 @@ struct ena_ring { enum ena_ring_type type; enum ena_admin_placement_policy_type tx_mem_queue_type; - /* Holds the empty requests for TX OOO completions */ - uint16_t *empty_tx_reqs; + /* Holds the empty requests for TX/RX OOO completions */ + union { + uint16_t *empty_tx_reqs; + uint16_t *empty_rx_reqs; + }; + union { struct ena_tx_buffer *tx_buffer_info; /* contex of tx packet */ struct rte_mbuf **rx_buffer_info; /* contex of rx packet */ From patchwork Thu Jun 7 09:43:10 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40728 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP 
id E5AFA1B3D6; Thu, 7 Jun 2018 11:43:56 +0200 (CEST) Received: from mail-lf0-f49.google.com (mail-lf0-f49.google.com [209.85.215.49]) by dpdk.org (Postfix) with ESMTP id B997D1B29F for ; Thu, 7 Jun 2018 11:43:52 +0200 (CEST) Received: by mail-lf0-f49.google.com with SMTP id n15-v6so13630853lfn.10 for ; Thu, 07 Jun 2018 02:43:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=p8OtjOMsPxmK9PD1AELCqbAvCctoqiy8t0TFI8EKN78=; b=O1i1mzxX8HEhgo7UWRjo82SYg0qllirUbSlDxmztxybLSJ5rKb7ydktTpUc+0nZqEk ZigxJ2favNUBNapfJcQzjhMqQxb6BlZKeLTVpCsS4RmEfHoE2t31KSZcwHuj1VOzQOYb ee6De1Q7ZqnTxlqAQnZq4e4ekWLYP5sVHARFJJez3CROLUaq4qdrGda7x0seEljKn08g LySh0b7UNwo7RHLBmsbne/0AlYsxAj9WXK+f/Z9doOiBa0YsZbcC2r4EZoOW7GpEvTEu tloEKBohmkX5EGJXT90Os1O5S+ycOrLsZIU57gIq1oJWvjHZrHksvpSvr7WvYuEyY5kH kBrA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=p8OtjOMsPxmK9PD1AELCqbAvCctoqiy8t0TFI8EKN78=; b=aHXWBk1RlIxJrrWunPlPTSPe04lS8UZzSEbCookdY1hUhIx0YplGFU7iO1zP9U6BJi Dox93xsnSbt7iEJ+jlhzFEogKmVkKgdHv59gJ/ZSGlg+SeQj3CfgwzyUh/5sX+0xYZ/T tuWEQ5to6RarFBSrMGqrqPAC5eKj0nSJIIfVSGvoaTP7hVI6ZQMuIHisb50KPJQOlFc2 nak/dwmHUNpJVrAV6iOVgVkqdkvIp2KlYm1+rbBgY0KpdM+5phTa8xq48zuoXaHeZXce Teu4GfE5F45awPDZ68Lw/UJjh26t6zq4Fro5BfxYXzBjMm0S/fuSV4J/XcBKeyZchIQX wExA== X-Gm-Message-State: APt69E2x5fMDrHuoHpxqZ0AXoETJct1ouUCLFJZDT2Emwovc2qUUqa42 s7WtrMYBD1ZzgSyDQmSLnHH9Kg== X-Google-Smtp-Source: ADUXVKII8OpXPu1XIG6F1P4nOP0D7XFqbHAfBGXdCeqoN5SDIhMzMILZq7dDkt9lgHetKhEFfo7+/g== X-Received: by 2002:a19:4e86:: with SMTP id u6-v6mr863099lfk.105.1528364632357; Thu, 07 Jun 2018 02:43:52 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. 
[31.172.191.173]) by smtp.gmail.com with ESMTPSA id p28-v6sm3612368lfh.24.2018.06.07.02.43.50 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 07 Jun 2018 02:43:51 -0700 (PDT) From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik Date: Thu, 7 Jun 2018 11:43:10 +0200 Message-Id: <20180607094322.14312-15-mk@semihalf.com> X-Mailer: git-send-email 2.14.1 In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 15/27] net/ena: linearize Tx mbuf X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Rafal Kozik The function ena_check_and_linearize_mbuf checks a Tx mbuf for its number of segments and linearizes (defragments) it if necessary. It is called before sending each packet. Information about the maximum number of segments is stored per ring. The maximum number of segments supported by the NIC is taken from ENA COM in the ena_calc_queue_size function and stored in the adapter structure.
Signed-off-by: Rafal Kozik Acked-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 31 ++++++++++++++++++++++++++++++- drivers/net/ena/ena_ethdev.h | 2 ++ 2 files changed, 32 insertions(+), 1 deletion(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f0e95ef58..cdefcd325 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -894,6 +894,7 @@ static int ena_check_valid_conf(struct ena_adapter *adapter) static int ena_calc_queue_size(struct ena_com_dev *ena_dev, + u16 *max_tx_sgl_size, struct ena_com_dev_get_features_ctx *get_feat_ctx) { uint32_t queue_size = ENA_DEFAULT_RING_SIZE; @@ -916,6 +917,9 @@ ena_calc_queue_size(struct ena_com_dev *ena_dev, return -EFAULT; } + *max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS, + get_feat_ctx->max_queues.max_packet_tx_descs); + return queue_size; } @@ -1491,6 +1495,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) struct ena_com_dev *ena_dev = &adapter->ena_dev; struct ena_com_dev_get_features_ctx get_feat_ctx; int queue_size, rc; + u16 tx_sgl_size = 0; static int adapters_found; bool wd_state; @@ -1547,13 +1552,15 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; adapter->num_queues = get_feat_ctx.max_queues.max_sq_num; - queue_size = ena_calc_queue_size(ena_dev, &get_feat_ctx); + queue_size = ena_calc_queue_size(ena_dev, &tx_sgl_size, &get_feat_ctx); if ((queue_size <= 0) || (adapter->num_queues <= 0)) return -EFAULT; adapter->tx_ring_size = queue_size; adapter->rx_ring_size = queue_size; + adapter->max_tx_sgl_size = tx_sgl_size; + /* prepare ring structures */ ena_init_rings(adapter); @@ -1652,6 +1659,7 @@ static void ena_init_rings(struct ena_adapter *adapter) ring->id = i; ring->tx_mem_queue_type = adapter->ena_dev.tx_mem_queue_type; ring->tx_max_header_size = adapter->ena_dev.tx_max_header_size; + ring->sgl_size = adapter->max_tx_sgl_size; } for (i = 0; i < adapter->num_queues; i++) 
{ @@ -1923,6 +1931,23 @@ static void ena_update_hints(struct ena_adapter *adapter, } } +static int ena_check_and_linearize_mbuf(struct ena_ring *tx_ring, + struct rte_mbuf *mbuf) +{ + int num_segments, rc; + + num_segments = mbuf->nb_segs; + + if (likely(num_segments < tx_ring->sgl_size)) + return 0; + + rc = rte_pktmbuf_linearize(mbuf); + if (unlikely(rc)) + RTE_LOG(WARNING, PMD, "Mbuf linearize failed\n"); + + return rc; +} + static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -1953,6 +1978,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { mbuf = tx_pkts[sent_idx]; + rc = ena_check_and_linearize_mbuf(tx_ring, mbuf); + if (unlikely(rc)) + break; + req_id = tx_ring->empty_tx_reqs[next_to_use & ring_mask]; tx_info = &tx_ring->tx_buffer_info[req_id]; tx_info->mbuf = mbuf; diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index bba5ad53a..73c110ab9 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -101,6 +101,7 @@ struct ena_ring { int configured; struct ena_adapter *adapter; uint64_t offloads; + u16 sgl_size; } __rte_cache_aligned; enum ena_adapter_state { @@ -167,6 +168,7 @@ struct ena_adapter { /* TX */ struct ena_ring tx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned; int tx_ring_size; + u16 max_tx_sgl_size; /* RX */ struct ena_ring rx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned; From patchwork Thu Jun 7 09:43:11 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40729 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7830D1B3DF; Thu, 7 Jun 2018 11:43:58 +0200 (CEST) Received: from mail-lf0-f67.google.com 
(mail-lf0-f67.google.com [209.85.215.67]) by dpdk.org (Postfix) with ESMTP id 12A001B3D3 for ; Thu, 7 Jun 2018 11:43:55 +0200 (CEST) Received: by mail-lf0-f67.google.com with SMTP id i15-v6so3338248lfc.2 for ; Thu, 07 Jun 2018 02:43:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=YkMoZ3ehthKuZvPEQPQJfZ+55VUJdGkxC1FGdwTPkM4=; b=ci/G+2zv1/COEdv8Io6kr4g4OC8mcky9K7h9P58hFSJ7obFe+1zfaOysr+ApiDpFPj M0I2pqM5Rq22MEFBFFo2GTyM7VcigKDckBEH1v1C60yIN2U7FdpbcZiJ/DvZJoQ25NWG /pdJiRq9JoV7D+TVCTdTXFxSqAEDomdFC3bppvNS6BpDm64eGVbDLJN0AObUbAAo9g9F mUyfmOeotObrBs9Iz/OOA6adH8fTzwIR7TzoeRN2gTsZ6yF8z7V9Fv+2kA3HCS50fzHs W4P92Mxu3gu9erQqNsrx/8JilZdbQ8FxGguF19MQZO8uNeCOqHIm+K80fqXjW1wfwiT/ Z9TQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=YkMoZ3ehthKuZvPEQPQJfZ+55VUJdGkxC1FGdwTPkM4=; b=um4eeK83Z9DKay2jYWTtWuRD+CNJ3PBmfb+T4wkHP7poVlweTV4LFz+lI5liuD8E9R 0RjZ/QXZb3r+BRC+QO+ZK8mxeq8n39H+hK5WpHBaYz5+gpKXRjZVm62r4iJHHETHJ89m W39MQRTpz40jOtyu8HbvIItk+nsXWMwd/2f5ycgVNv0JyBmYYANekU/YUqvsh+So3+Wx rQhWeMwCLdFssaRpmY7ud3q7Q0/STyy8M6qilGUCVLqvcVK7zA3dHFsMNJXloT9+2zsT ZUreRvkvjwAzdfgnuKeer77JDesczGCJ/Ai5t0yQlX9igv6vEDLPEJIRVqLpnnvssRWN OzrA== X-Gm-Message-State: APt69E1NrfjXq0KoUJhmnLQLMwwHkA9Wa0U450Bn7KfGGXGejhkSgpXw TMy2Auy9RdI5DtD+vPEQ6m1FnA== X-Google-Smtp-Source: ADUXVKLg4N6nelH4CjYNV03K26u+fmt7YrQp+Byre9iFmciesPcguALYgqxVLR4WKxkCqO+ixugPng== X-Received: by 2002:a19:9358:: with SMTP id v85-v6mr817215lfd.83.1528364634758; Thu, 07 Jun 2018 02:43:54 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. 
[31.172.191.173]) by smtp.gmail.com with ESMTPSA id p28-v6sm3612368lfh.24.2018.06.07.02.43.52 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 07 Jun 2018 02:43:53 -0700 (PDT) From: Michal Krawczyk To: Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik Date: Thu, 7 Jun 2018 11:43:11 +0200 Message-Id: <20180607094322.14312-16-mk@semihalf.com> X-Mailer: git-send-email 2.14.1 In-Reply-To: <20180607094322.14312-1-mk@semihalf.com> References: <20180607094322.14312-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 16/27] net/ena: add info about max number of Tx/Rx descriptors X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Rafal Kozik In the ena_infos_get function, the driver provides information about the minimal and maximal number of Rx and Tx descriptors.
Signed-off-by: Rafal Kozik Acked-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index cdefcd325..a88de2eca 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -85,6 +85,9 @@ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#define ENA_MAX_RING_DESC ENA_DEFAULT_RING_SIZE +#define ENA_MIN_RING_DESC 128 + enum ethtool_stringset { ETH_SS_TEST = 0, ETH_SS_STATS, @@ -1740,6 +1743,16 @@ static void ena_infos_get(struct rte_eth_dev *dev, adapter->tx_supported_offloads = tx_feat; adapter->rx_supported_offloads = rx_feat; + + dev_info->rx_desc_lim.nb_max = ENA_MAX_RING_DESC; + dev_info->rx_desc_lim.nb_min = ENA_MIN_RING_DESC; + + dev_info->tx_desc_lim.nb_max = ENA_MAX_RING_DESC; + dev_info->tx_desc_lim.nb_min = ENA_MIN_RING_DESC; + dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS, + feat.max_queues.max_packet_tx_descs); + dev_info->tx_desc_lim.nb_mtu_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS, + feat.max_queues.max_packet_tx_descs); } static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, From patchwork Thu Jun 7 09:43:12 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 40730 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 675171B3E9; Thu, 7 Jun 2018 11:44:01 +0200 (CEST) Received: from mail-lf0-f65.google.com (mail-lf0-f65.google.com [209.85.215.65]) by dpdk.org (Postfix) with ESMTP id 773E71B3E7 for ; Thu, 7 Jun 2018 11:43:59 +0200 (CEST) Received: by mail-lf0-f65.google.com with SMTP id d24-v6so13634370lfa.8 for ; Thu, 07 Jun 2018 02:43:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=FrOfw3+PIOrZ2QbcpvWeDhVbiIJpYPjUagrfULSx9lc=; b=yT4jvoyZz7iJLOsU5AAW/47iBedTh97bpanHlXUQuUHc9EBYJ/pPVXEdrrsRhSGoip Yw2oPq2Zjd96XpaUKm0Mq5rK5XxT54Blb33L899SUhraoDDtOprUIzRw3ZSv62ySA+9T mliyEr8TlcVUuDO+jFIHYAjN+uF4eKmfgFAv6w+a1eQv/kH05xGk44n6GKuYdF62knmo vn4+u6g2t5jsX+ZlIm4BDEmsdsg0KGnVpXnmHW/2Vo8q9wlH+o4F9Kz/SoVSVF5mRBCf RR4E0k2aC9NbogKVN9Urr6FyWHP09PFXWXl/+UXW3o3bSqn/GLQRLB3BIZgpYLiBCr6H a6aA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=FrOfw3+PIOrZ2QbcpvWeDhVbiIJpYPjUagrfULSx9lc=; b=KoL8OAaYyX44m8JyZOMJCt7PUDtc/FvLelRHJWMSN2dgDIlXbqUm6ZcaqN91qNOF5b z6D2FpcIltb6XzyiEM4iQsFJLU70iuoY5Z3y0Hm4SN6SEgomKxoAzs4LzQnl0QKifn9y wb2r6KxCCc6JOyDdp469QsNHfRMA92+fQYYFMgOkK8l6NqBpDw8X4EMj8885K+3m3m/t f8ZPpfHmKlGoyZCimvs6MQo+P+TX05v/Fv4/OZWeR9Vn34JANpWCuJcSylQAhecl+y5u YJRfnaP60sfYZ9MWKFKzWSd46cxxY82Ho3C96ws1QMIwF33rkrjerVNiAD6ImrkL7YRz ooGw== X-Gm-Message-State: APt69E2AaiUfmhkB1EuGtjybg7ytXSD2Kate+4623uUlH/WGLguXits9 /BkCkPn7yvLwqLVqjCTNHZBaNg== X-Google-Smtp-Source: ADUXVKKj5/M2GOQJlIHjEF5xbDuAA0cH27ntqcup8pT2/Pdq8vZdFzVel/jwO8/VUxcx0Y2y/WQ8iA== X-Received: by 2002:a2e:8350:: with SMTP id l16-v6mr944118ljh.7.1528364638333; Thu, 07 Jun 2018 02:43:58 -0700 (PDT) Received: from mkPC.semihalf.local (31-172-191-173.noc.fibertech.net.pl. 
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:12 +0200
Message-Id: <20180607094322.14312-17-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 17/27] net/ena: unimplemented handler error

From: Rafal Kozik

Enable the AENQ FATAL_ERROR and WARNING callbacks by setting the
corresponding flags in aenq_groups. Both are handled by the
"unimplemented handler", which logs an error whenever it is called.
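The enable-then-mask pattern described above can be sketched standalone. This is only an illustration; the group names and bit values below are hypothetical stand-ins for the ENA admin definitions, not the driver's actual enum:

```c
/* Hypothetical AENQ group bits, standing in for the ena_admin_aenq_group
 * values: the driver ORs together the groups it wants enabled, then masks
 * the result by the groups the device reports as supported. */
enum aenq_group {
	AENQ_LINK_CHANGE = 1u << 0,
	AENQ_FATAL_ERROR = 1u << 1,
	AENQ_WARNING     = 1u << 2,
};

unsigned int requested_aenq_groups(unsigned int supported_groups)
{
	unsigned int groups = AENQ_LINK_CHANGE |
			      AENQ_FATAL_ERROR |
			      AENQ_WARNING;

	/* Never request a group the device does not support. */
	return groups & supported_groups;
}
```

Masking before calling the configuration admin command means an older device that lacks, say, the WARNING group simply never delivers it, instead of rejecting the whole request.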
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a88de2eca..5f4956078 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1415,7 +1415,9 @@ static int ena_device_init(struct ena_com_dev *ena_dev,

 	aenq_groups = BIT(ENA_ADMIN_LINK_CHANGE) |
 		      BIT(ENA_ADMIN_NOTIFICATION) |
-		      BIT(ENA_ADMIN_KEEP_ALIVE);
+		      BIT(ENA_ADMIN_KEEP_ALIVE) |
+		      BIT(ENA_ADMIN_FATAL_ERROR) |
+		      BIT(ENA_ADMIN_WARNING);

 	aenq_groups &= get_feat_ctx->aenq.supported_groups;
 	rc = ena_com_set_aenq_config(ena_dev, aenq_groups);
@@ -2196,7 +2198,8 @@ static void ena_keep_alive(void *adapter_data,
 static void unimplemented_aenq_handler(__rte_unused void *data,
 				       __rte_unused struct ena_admin_aenq_entry *aenq_e)
 {
-	// Unimplemented handler
+	RTE_LOG(ERR, PMD, "Unknown event was received or event with "
+		"unimplemented handler\n");
 }

 static struct ena_aenq_handlers aenq_handlers = {

From patchwork Thu Jun 7 09:43:13 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40731
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:13 +0200
Message-Id: <20180607094322.14312-18-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 18/27] net/ena: rework configuration of IO queue numbers

From: Rafal Kozik

Move the configuration of the number of IO queues to a separate function
and take the maximum number of IO completion queues into consideration.
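The usable IO queue count is simply the smaller of the device's submission-queue and completion-queue limits, since every IO queue needs one of each. A minimal standalone sketch of that clamp, with plain integer inputs standing in for the driver's feature context (the function name and the -1 error return are assumptions for illustration):

```c
/* Hypothetical standalone version of the queue-count clamp: a device can
 * only run as many IO queues as it has both submission queues (SQs) and
 * completion queues (CQs) to back them. */
int calc_io_queue_num(int max_sq_num, int max_cq_num)
{
	int io_queue_num = max_sq_num < max_cq_num ? max_sq_num : max_cq_num;

	/* Zero usable queues is a fatal configuration error. */
	if (io_queue_num == 0)
		return -1; /* stand-in for the driver's -EFAULT */

	return io_queue_num;
}
```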
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 5f4956078..1f7463496 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1491,6 +1491,24 @@ static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer,
 	}
 }

+static int ena_calc_io_queue_num(__rte_unused struct ena_com_dev *ena_dev,
+				 struct ena_com_dev_get_features_ctx *get_feat_ctx)
+{
+	int io_sq_num, io_cq_num, io_queue_num;
+
+	io_sq_num = get_feat_ctx->max_queues.max_sq_num;
+	io_cq_num = get_feat_ctx->max_queues.max_cq_num;
+
+	io_queue_num = RTE_MIN(io_sq_num, io_cq_num);
+
+	if (unlikely(io_queue_num == 0)) {
+		RTE_LOG(ERR, PMD, "Number of IO queues should not be 0\n");
+		return -EFAULT;
+	}
+
+	return io_queue_num;
+}
+
 static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
@@ -1555,7 +1573,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->wd_state = wd_state;

 	ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
-	adapter->num_queues = get_feat_ctx.max_queues.max_sq_num;
+	adapter->num_queues = ena_calc_io_queue_num(ena_dev,
+						    &get_feat_ctx);

 	queue_size = ena_calc_queue_size(ena_dev, &tx_sgl_size, &get_feat_ctx);
 	if ((queue_size <= 0) || (adapter->num_queues <= 0))

From patchwork Thu Jun 7 09:43:14 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40732
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:14 +0200
Message-Id: <20180607094322.14312-19-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 19/27] net/ena: validate Tx req id

From: Rafal Kozik

Validate the Tx req id while clearing completed packets. If the req id
is wrong, trigger a NIC reset.

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 1f7463496..5fbf0e5d5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -384,6 +384,27 @@ static inline int validate_rx_req_id(struct ena_ring *rx_ring, uint16_t req_id)
 	return -EFAULT;
 }

+static int validate_tx_req_id(struct ena_ring *tx_ring, u16 req_id)
+{
+	struct ena_tx_buffer *tx_info = NULL;
+
+	if (likely(req_id < tx_ring->ring_size)) {
+		tx_info = &tx_ring->tx_buffer_info[req_id];
+		if (likely(tx_info->mbuf))
+			return 0;
+	}
+
+	if (tx_info)
+		RTE_LOG(ERR, PMD, "tx_info doesn't have valid mbuf\n");
+	else
+		RTE_LOG(ERR, PMD, "Invalid req_id: %hu\n", req_id);
+
+	/* Trigger device reset */
+	tx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_TX_REQ_ID;
+	tx_ring->adapter->trigger_reset = true;
+	return -EFAULT;
+}
+
 static void ena_config_host_info(struct ena_com_dev *ena_dev)
 {
 	struct ena_admin_host_info *host_info;
@@ -2093,6 +2114,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,

 	/* Clear complete packets */
 	while (ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq, &req_id) >= 0) {
+		rc = validate_tx_req_id(tx_ring, req_id);
+		if (rc)
+			break;
+
 		/* Get Tx info & store how many descs were processed */
 		tx_info = &tx_ring->tx_buffer_info[req_id];
 		total_tx_descs += tx_info->tx_descs;

From patchwork Thu Jun 7 09:43:15 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40733
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:15 +0200
Message-Id: <20180607094322.14312-20-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 20/27] net/ena: add (un)likely statements

From: Rafal Kozik

Add likely and unlikely statements to increase performance.
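In DPDK, likely() and unlikely() expand to GCC's __builtin_expect, which tells the compiler which side of a branch is the hot path so it can lay out code accordingly. A minimal sketch of equivalent macros (checked_div is a hypothetical example function, not part of the driver):

```c
/* Minimal stand-ins for DPDK's rte_branch_prediction.h macros: they hint
 * the compiler about the expected truth value of a condition, so the
 * common path falls through without a taken branch. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

int checked_div(int a, int b)
{
	if (unlikely(b == 0))	/* error path, expected to be rare */
		return -1;

	return a / b;		/* hot path */
}
```

The annotations change generated code layout only, never semantics, which is why this patch can sprinkle them over existing conditions without functional change.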
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 5fbf0e5d5..1425a0496 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -936,7 +936,7 @@ ena_calc_queue_size(struct ena_com_dev *ena_dev,
 	if (!rte_is_power_of_2(queue_size))
 		queue_size = rte_align32pow2(queue_size >> 1);

-	if (queue_size == 0) {
+	if (unlikely(queue_size == 0)) {
 		PMD_INIT_LOG(ERR, "Invalid queue size");
 		return -EFAULT;
 	}
@@ -1360,7 +1360,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	}

 	/* When we submitted free recources to device... */
-	if (i > 0) {
+	if (likely(i > 0)) {
 		/* ...let HW know that it can fill buffers with data */
 		rte_wmb();
 		ena_com_write_sq_doorbell(rxq->ena_com_io_sq);
@@ -1466,7 +1466,7 @@ static void ena_interrupt_handler_rte(void *cb_arg)
 	struct ena_com_dev *ena_dev = &adapter->ena_dev;

 	ena_com_admin_q_comp_intr_handler(ena_dev);
-	if (adapter->state != ENA_ADAPTER_STATE_CLOSED)
+	if (likely(adapter->state != ENA_ADAPTER_STATE_CLOSED))
 		ena_com_aenq_intr_handler(ena_dev, adapter);
 }

@@ -1856,7 +1856,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 		mbuf->refcnt = 1;
 		mbuf->next = NULL;
-		if (segments == 0) {
+		if (unlikely(segments == 0)) {
 			mbuf->nb_segs = ena_rx_ctx.descs;
 			mbuf->port = rx_ring->port_id;
 			mbuf->pkt_len = 0;

From patchwork Thu Jun 7 09:43:16 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40734
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:16 +0200
Message-Id: <20180607094322.14312-21-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 21/27] net/ena: adjust error checking and cleaning

From: Rafal Kozik

Adjust error checking and cleaning to the Linux driver:
 * add a check for whether the MTU is too small,
 * fix error messages (Rx and Tx were mismatched),
 * return the error received from the base driver, or a proper error
   code, instead of -1,
 * release occupied resources in case of error,
 * trigger a NIC reset in case of an Rx error.
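The resource-release fix follows the usual kernel-style goto-unwind pattern: each failure path jumps to a label that frees exactly the resources acquired before the failure, in reverse order. A minimal standalone sketch of the pattern (the function and its two-buffer scenario are hypothetical, not the driver's code):

```c
#include <stdlib.h>

/* Hypothetical two-stage setup illustrating goto-unwind error handling:
 * a later failure releases the earlier allocation, and every error path
 * exits through a single point with the right cleanup already done. */
int setup_two_buffers(void **a, void **b, size_t len)
{
	*a = malloc(len);
	if (*a == NULL)
		goto err;

	*b = malloc(len);
	if (*b == NULL)
		goto err_free_a;

	return 0;

err_free_a:
	free(*a);
	*a = NULL;
err:
	return -1;
}
```

Compared with freeing inline at every failure site, the label chain keeps acquisition and release visibly paired, which is why this patch converts ena_tx_queue_setup() to err_free / err_destroy_io_queue labels instead of scattered returns.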
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 158 ++++++++++++++++++++++++++++---------------
 drivers/net/ena/ena_ethdev.h |   2 +
 2 files changed, 104 insertions(+), 56 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 1425a0496..c051f23a3 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -436,9 +436,12 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev)

 	rc = ena_com_set_host_attributes(ena_dev);
 	if (rc) {
-		RTE_LOG(ERR, PMD, "Cannot set host attributes\n");
-		if (rc != -ENA_COM_UNSUPPORTED)
-			goto err;
+		if (rc == -ENA_COM_UNSUPPORTED)
+			RTE_LOG(WARNING, PMD, "Cannot set host attributes\n");
+		else
+			RTE_LOG(ERR, PMD, "Cannot set host attributes\n");
+
+		goto err;
 	}

 	return;
@@ -489,9 +492,12 @@ static void ena_config_debug_area(struct ena_adapter *adapter)

 	rc = ena_com_set_host_attributes(&adapter->ena_dev);
 	if (rc) {
-		RTE_LOG(WARNING, PMD, "Cannot set host attributes\n");
-		if (rc != -ENA_COM_UNSUPPORTED)
-			goto err;
+		if (rc == -ENA_COM_UNSUPPORTED)
+			RTE_LOG(WARNING, PMD, "Cannot set host attributes\n");
+		else
+			RTE_LOG(ERR, PMD, "Cannot set host attributes\n");
+
+		goto err;
 	}

 	return;
@@ -534,7 +540,9 @@ ena_dev_reset(struct rte_eth_dev *dev)

 	ena_com_set_admin_running_state(ena_dev, false);

-	ena_com_dev_reset(ena_dev, adapter->reset_reason);
+	rc = ena_com_dev_reset(ena_dev, adapter->reset_reason);
+	if (rc)
+		RTE_LOG(ERR, PMD, "Device reset failed\n");

 	for (i = 0; i < nb_queues; i++)
 		mb_pool_rx[i] = adapter->rx_ring[i].mb_pool;
@@ -579,7 +587,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev,
 	struct ena_adapter *adapter =
 		(struct ena_adapter *)(dev->data->dev_private);
 	struct ena_com_dev *ena_dev = &adapter->ena_dev;
-	int ret, i;
+	int rc, i;
 	u16 entry_value;
 	int conf_idx;
 	int idx;
@@ -591,8 +599,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev,
 		RTE_LOG(WARNING, PMD,
 			"indirection table %d is bigger than supported (%d)\n",
 			reta_size, ENA_RX_RSS_TABLE_SIZE);
-		ret = -EINVAL;
-		goto err;
+		return -EINVAL;
 	}

 	for (i = 0 ; i < reta_size ; i++) {
@@ -604,29 +611,28 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev,
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
-			ret = ena_com_indirect_table_fill_entry(ena_dev,
-								i,
-								entry_value);
-			if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) {
+
+			rc = ena_com_indirect_table_fill_entry(ena_dev,
+							       i,
+							       entry_value);
+			if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) {
 				RTE_LOG(ERR, PMD,
 					"Cannot fill indirect table\n");
-				ret = -ENOTSUP;
-				goto err;
+				return rc;
 			}
 		}
 	}

-	ret = ena_com_indirect_table_set(ena_dev);
-	if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) {
+	rc = ena_com_indirect_table_set(ena_dev);
+	if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) {
 		RTE_LOG(ERR, PMD, "Cannot flush the indirect table\n");
-		ret = -ENOTSUP;
-		goto err;
+		return rc;
 	}

 	RTE_LOG(DEBUG, PMD, "%s(): RSS configured %d entries for port %d\n",
 		__func__, reta_size, adapter->rte_dev->data->port_id);
-err:
-	return ret;
+
+	return 0;
 }

 /* Query redirection table. */
@@ -637,7 +643,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev,
 	struct ena_adapter *adapter =
 		(struct ena_adapter *)(dev->data->dev_private);
 	struct ena_com_dev *ena_dev = &adapter->ena_dev;
-	int ret;
+	int rc;
 	int i;
 	u32 indirect_table[ENA_RX_RSS_TABLE_SIZE] = {0};
 	int reta_conf_idx;
@@ -647,11 +653,10 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev,
 	    (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
 		return -EINVAL;

-	ret = ena_com_indirect_table_get(ena_dev, indirect_table);
-	if (unlikely(ret && (ret != ENA_COM_UNSUPPORTED))) {
+	rc = ena_com_indirect_table_get(ena_dev, indirect_table);
+	if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) {
 		RTE_LOG(ERR, PMD, "cannot get indirect table\n");
-		ret = -ENOTSUP;
-		goto err;
+		return -ENOTSUP;
 	}

 	for (i = 0 ; i < reta_size ; i++) {
@@ -661,8 +666,8 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev,
 		reta_conf[reta_conf_idx].reta[reta_idx] =
 			ENA_IO_RXQ_IDX_REV(indirect_table[i]);
 	}
-err:
-	return ret;
+
+	return 0;
 }

 static int ena_rss_init_default(struct ena_adapter *adapter)
@@ -884,7 +889,7 @@ static int ena_queue_restart_all(struct rte_eth_dev *dev,
 				PMD_INIT_LOG(ERR,
 					     "failed to restart queue %d type(%d)",
 					     i, ring_type);
-				return -1;
+				return rc;
 			}
 		}
 	}
@@ -908,9 +913,11 @@ static int ena_check_valid_conf(struct ena_adapter *adapter)
 {
 	uint32_t max_frame_len = ena_get_mtu_conf(adapter);

-	if (max_frame_len > adapter->max_mtu) {
-		PMD_INIT_LOG(ERR, "Unsupported MTU of %d", max_frame_len);
-		return -1;
+	if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+		PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
+			"max mtu: %d, min mtu: %d\n",
+			max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+		return ENA_COM_UNSUPPORTED;
 	}

 	return 0;
@@ -1008,12 +1015,12 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	ena_dev = &adapter->ena_dev;
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device");

-	if (mtu > ena_get_mtu_conf(adapter)) {
+	if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
 		RTE_LOG(ERR, PMD,
-			"Given MTU (%d) exceeds maximum MTU supported (%d)\n",
-			mtu, ena_get_mtu_conf(adapter));
-		rc = -EINVAL;
-		goto err;
+			"Invalid MTU setting. new_mtu: %d "
+			"max mtu: %d min mtu: %d\n",
+			mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+		return -EINVAL;
 	}

 	rc = ena_com_set_dev_mtu(ena_dev, mtu);
@@ -1022,7 +1029,6 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	else
 		RTE_LOG(NOTICE, PMD, "Set MTU: %d\n", mtu);

-err:
 	return rc;
 }
@@ -1093,7 +1099,7 @@ static int ena_queue_restart(struct ena_ring *ring)
 	rc = ena_populate_rx_queue(ring, bufs_num);
 	if (rc != bufs_num) {
 		PMD_INIT_LOG(ERR, "Failed to populate rx ring !");
-		return (-1);
+		return ENA_COM_FAULT;
 	}

 	return 0;
@@ -1123,12 +1129,12 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(CRIT, PMD,
 			"API violation. Queue %d is already configured\n",
 			queue_idx);
-		return -1;
+		return ENA_COM_FAULT;
 	}

 	if (!rte_is_power_of_2(nb_desc)) {
 		RTE_LOG(ERR, PMD,
-			"Unsupported size of RX queue: %d is not a power of 2.",
+			"Unsupported size of TX queue: %d is not a power of 2.",
 			nb_desc);
 		return -EINVAL;
 	}
@@ -1154,6 +1160,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(ERR, PMD,
 			"failed to create io TX queue #%d (qid:%d) rc: %d\n",
 			queue_idx, ena_qid, rc);
+		return rc;
 	}
 	txq->ena_com_io_cq = &ena_dev->io_cq_queues[ena_qid];
 	txq->ena_com_io_sq = &ena_dev->io_sq_queues[ena_qid];
@@ -1165,8 +1172,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(ERR, PMD,
 			"Failed to get TX queue handlers. TX queue num %d rc: %d\n",
 			queue_idx, rc);
-		ena_com_destroy_io_queue(ena_dev, ena_qid);
-		goto err;
+		goto err_destroy_io_queue;
 	}

 	txq->port_id = dev->data->port_id;
@@ -1180,7 +1186,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 					 RTE_CACHE_LINE_SIZE);
 	if (!txq->tx_buffer_info) {
 		RTE_LOG(ERR, PMD, "failed to alloc mem for tx buffer info\n");
-		return -ENOMEM;
+		rc = -ENOMEM;
+		goto err_destroy_io_queue;
 	}

 	txq->empty_tx_reqs = rte_zmalloc("txq->empty_tx_reqs",
@@ -1188,9 +1195,10 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 					 RTE_CACHE_LINE_SIZE);
 	if (!txq->empty_tx_reqs) {
 		RTE_LOG(ERR, PMD, "failed to alloc mem for tx reqs\n");
-		rte_free(txq->tx_buffer_info);
-		return -ENOMEM;
+		rc = -ENOMEM;
+		goto err_free;
 	}
+
 	for (i = 0; i < txq->ring_size; i++)
 		txq->empty_tx_reqs[i] = i;
@@ -1202,7 +1210,14 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Store pointer to this queue in upper layer */
 	txq->configured = 1;
 	dev->data->tx_queues[queue_idx] = txq;
-err:
+
+	return 0;
+
+err_free:
+	rte_free(txq->tx_buffer_info);
+
+err_destroy_io_queue:
+	ena_com_destroy_io_queue(ena_dev, ena_qid);
 	return rc;
 }
@@ -1229,12 +1244,12 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(CRIT, PMD,
 			"API violation. Queue %d is already configured\n",
 			queue_idx);
-		return -1;
+		return ENA_COM_FAULT;
 	}

 	if (!rte_is_power_of_2(nb_desc)) {
 		RTE_LOG(ERR, PMD,
-			"Unsupported size of TX queue: %d is not a power of 2.",
+			"Unsupported size of RX queue: %d is not a power of 2.",
 			nb_desc);
 		return -EINVAL;
 	}
@@ -1256,9 +1271,11 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	ctx.numa_node = ena_cpu_to_node(queue_idx);

 	rc = ena_com_create_io_queue(ena_dev, &ctx);
-	if (rc)
+	if (rc) {
 		RTE_LOG(ERR, PMD, "failed to create io RX queue #%d rc: %d\n",
 			queue_idx, rc);
+		return rc;
+	}

 	rxq->ena_com_io_cq = &ena_dev->io_cq_queues[ena_qid];
 	rxq->ena_com_io_sq = &ena_dev->io_sq_queues[ena_qid];
@@ -1271,6 +1288,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 			"Failed to get RX queue handlers. RX queue num %d rc: %d\n",
 			queue_idx, rc);
 		ena_com_destroy_io_queue(ena_dev, ena_qid);
+		return rc;
 	}

 	rxq->port_id = dev->data->port_id;
@@ -1284,6 +1302,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 					  RTE_CACHE_LINE_SIZE);
 	if (!rxq->rx_buffer_info) {
 		RTE_LOG(ERR, PMD, "failed to alloc mem for rx buffer info\n");
+		ena_com_destroy_io_queue(ena_dev, ena_qid);
 		return -ENOMEM;
 	}
@@ -1294,6 +1313,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n");
 		rte_free(rxq->rx_buffer_info);
 		rxq->rx_buffer_info = NULL;
+		ena_com_destroy_io_queue(ena_dev, ena_qid);
 		return -ENOMEM;
 	}
@@ -1344,6 +1364,10 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		rte_prefetch0(mbufs[((next_to_use + 4) & ring_mask)]);

 		req_id = rxq->empty_rx_reqs[next_to_use_masked];
+		rc = validate_rx_req_id(rxq, req_id);
+		if (unlikely(rc < 0))
+			break;
+
 		/* prepare physical address for DMA transaction */
 		ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM;
 		ebuf.len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM;
@@ -1359,9 +1383,17 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		next_to_use++;
 	}

+	if (unlikely(i < count))
+		RTE_LOG(WARNING, PMD, "refilled rx qid %d with only %d "
+			"buffers (from %d)\n", rxq->id, i, count);
+
 	/* When we submitted free recources to device... */
 	if (likely(i > 0)) {
-		/* ...let HW know that it can fill buffers with data */
+		/* ...let HW know that it can fill buffers with data
+		 *
+		 * Add memory barrier to make sure the desc were written before
+		 * issue a doorbell
+		 */
 		rte_wmb();
 		ena_com_write_sq_doorbell(rxq->ena_com_io_sq);
@@ -1589,7 +1621,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	rc = ena_device_init(ena_dev, &get_feat_ctx, &wd_state);
 	if (rc) {
 		PMD_INIT_LOG(CRIT, "Failed to init ENA device");
-		return -1;
+		goto err;
 	}

 	adapter->wd_state = wd_state;
@@ -1598,8 +1630,10 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 					       &get_feat_ctx);

 	queue_size = ena_calc_queue_size(ena_dev, &tx_sgl_size, &get_feat_ctx);
-	if ((queue_size <= 0) || (adapter->num_queues <= 0))
-		return -EFAULT;
+	if (queue_size <= 0 || adapter->num_queues <= 0) {
+		rc = -EFAULT;
+		goto err_device_destroy;
+	}

 	adapter->tx_ring_size = queue_size;
 	adapter->rx_ring_size = queue_size;
@@ -1628,7 +1662,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 					 RTE_CACHE_LINE_SIZE);
 	if (!adapter->drv_stats) {
 		RTE_LOG(ERR, PMD, "failed to alloc mem for adapter stats\n");
-		return -ENOMEM;
+		rc = -ENOMEM;
+		goto err_delete_debug_area;
 	}

 	rte_intr_callback_register(intr_handle,
@@ -1646,6 +1681,16 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->state = ENA_ADAPTER_STATE_INIT;

 	return 0;
+
+err_delete_debug_area:
+	ena_com_delete_debug_area(ena_dev);
+
+err_device_destroy:
+	ena_com_delete_host_info(ena_dev);
+	ena_com_admin_destroy(ena_dev);
+
+err:
+	return rc;
 }

 static int eth_ena_dev_uninit(struct rte_eth_dev *eth_dev)
@@ -1839,6 +1884,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 						    &ena_rx_ctx);
 		if (unlikely(rc)) {
 			RTE_LOG(ERR, PMD, "ena_com_rx_pkt error %d\n", rc);
+			rx_ring->adapter->trigger_reset = true;
 			return 0;
 		}
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 73c110ab9..2dc8129e0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -50,6 +50,8 @@
 #define ENA_NAME_MAX_LEN	20
 #define ENA_PKT_MAX_BUFS	17

+#define ENA_MIN_MTU		128
+
 #define ENA_MMIO_DISABLE_REG_READ	BIT(0)

 #define ENA_WD_TIMEOUT_SEC	3

From patchwork Thu Jun 7 09:43:17 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40735
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:17 +0200
Message-Id: <20180607094322.14312-22-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 22/27] net/ena: update numa node

From: Rafal Kozik

While initializing the Tx queues, update the Non-Uniform Memory Access
(NUMA) configuration in the NIC firmware.
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index c051f23a3..5c3b6494f 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1175,6 +1175,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		goto err_destroy_io_queue;
 	}
 
+	ena_com_update_numa_node(txq->ena_com_io_cq, ctx.numa_node);
+
 	txq->port_id = dev->data->port_id;
 	txq->next_to_clean = 0;
 	txq->next_to_use = 0;

From patchwork Thu Jun 7 09:43:18 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40736
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:18 +0200
Message-Id: <20180607094322.14312-23-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 23/27] net/ena: check pointer before memset

From: Rafal Kozik

Check whether the memory allocation succeeded before using the result.
Calling memset on a NULL pointer causes a segfault.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/base/ena_plat_dpdk.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index e2af4ee2c..d30153e92 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -210,10 +210,15 @@ extern uint32_t ena_alloc_cnt;
 			"ena_alloc_%d", ena_alloc_cnt++);	\
 		mz = rte_memzone_reserve(z_name, size, SOCKET_ID_ANY, \
 				RTE_MEMZONE_IOVA_CONTIG);	\
-		memset(mz->addr, 0, size);			\
-		virt = mz->addr;				\
-		phys = mz->iova;				\
 		handle = mz;					\
+		if (mz == NULL) {				\
+			virt = NULL;				\
+			phys = 0;				\
+		} else {					\
+			memset(mz->addr, 0, size);		\
+			virt = mz->addr;			\
+			phys = mz->iova;			\
+		}						\
 	} while (0)
 #define ENA_MEM_FREE_COHERENT(dmadev, size, virt, phys, handle) \
 		({ ENA_TOUCH(size); ENA_TOUCH(phys);		\
@@ -230,9 +235,14 @@ extern uint32_t ena_alloc_cnt;
 			"ena_alloc_%d", ena_alloc_cnt++);	\
 		mz = rte_memzone_reserve(z_name, size, node,	\
 				RTE_MEMZONE_IOVA_CONTIG);	\
-		memset(mz->addr, 0, size);			\
-		virt = mz->addr;				\
-		phys = mz->iova;				\
+		if (mz == NULL) {				\
+			virt = NULL;				\
+			phys = 0;				\
+		} else {					\
+			memset(mz->addr, 0, size);		\
+			virt = mz->addr;			\
+			phys = mz->iova;			\
+		}						\
 		(void)mem_handle;				\
 	} while (0)

From patchwork Thu Jun 7 09:43:19 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40737
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:19 +0200
Message-Id: <20180607094322.14312-24-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 24/27] net/ena: change memory type

From: Rafal Kozik

ENA_MEM_ALLOC_NODE does not need physically contiguous memory. In
addition, calling memset without checking whether the allocation
succeeded can cause a segmentation fault. Using rte_zmalloc_socket
avoids both issues.
Fixes: 3d3edc265fc8 ("net/ena: make coherent memory allocation NUMA-aware")

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/base/ena_plat_dpdk.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index d30153e92..22d7a9cb1 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -248,15 +248,8 @@ extern uint32_t ena_alloc_cnt;
 #define ENA_MEM_ALLOC_NODE(dmadev, size, virt, node, dev_node) \
 	do {							\
-		const struct rte_memzone *mz;			\
-		char z_name[RTE_MEMZONE_NAMESIZE];		\
 		ENA_TOUCH(dmadev); ENA_TOUCH(dev_node);		\
-		snprintf(z_name, sizeof(z_name),		\
-				"ena_alloc_%d", ena_alloc_cnt++); \
-		mz = rte_memzone_reserve(z_name, size, node,	\
-				RTE_MEMZONE_IOVA_CONTIG);	\
-		memset(mz->addr, 0, size);			\
-		virt = mz->addr;				\
+		virt = rte_zmalloc_socket(NULL, size, 0, node);	\
 	} while (0)

 #define ENA_MEM_ALLOC(dmadev, size) rte_zmalloc(NULL, size, 1)

From patchwork Thu Jun 7 09:43:20 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40738
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:20 +0200
Message-Id: <20180607094322.14312-25-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 25/27] net/ena: fix GENMASK_ULL macro

From: Rafal Kozik

When GENMASK_ULL(63, 0) is used, a left shift by 64 bits is performed.
Shifting by a count greater than or equal to the word length is
undefined behavior and fails on some platforms.
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/base/ena_plat_dpdk.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 22d7a9cb1..8a04e84b9 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -116,11 +116,13 @@ typedef uint64_t dma_addr_t;
 #define ENA_MIN16(x, y) RTE_MIN((x), (y))
 #define ENA_MIN8(x, y) RTE_MIN((x), (y))
 
+#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
 #define U64_C(x) x ## ULL
 #define BIT(nr)	(1UL << (nr))
 #define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
 #define GENMASK(h, l)	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
+#define GENMASK_ULL(h, l) (((~0ULL) - (1ULL << (l)) + 1) & \
+			(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
 
 #ifdef RTE_LIBRTE_ENA_COM_DEBUG
 #define ena_trc_dbg(format, arg...)	\
From patchwork Thu Jun 7 09:43:21 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40739
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com
Date: Thu, 7 Jun 2018 11:43:21 +0200
Message-Id: <20180607094322.14312-26-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 26/27] net/ena: store handle after memory allocation

The pointer returned by rte_memzone_reserve in the
ENA_MEM_ALLOC_COHERENT_NODE macro was not stored anywhere, so the
memory allocated by this macro could not be released.
Signed-off-by: Michal Krawczyk
---
 drivers/net/ena/base/ena_plat_dpdk.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 8a04e84b9..900ba1a6b 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -237,6 +237,7 @@ extern uint32_t ena_alloc_cnt;
 			"ena_alloc_%d", ena_alloc_cnt++);	\
 		mz = rte_memzone_reserve(z_name, size, node,	\
 				RTE_MEMZONE_IOVA_CONTIG);	\
+		mem_handle = mz;				\
 		if (mz == NULL) {				\
 			virt = NULL;				\
 			phys = 0;				\
@@ -245,7 +246,6 @@ extern uint32_t ena_alloc_cnt;
 			virt = mz->addr;			\
 			phys = mz->iova;			\
 		}						\
-		(void)mem_handle;				\
 	} while (0)
 
 #define ENA_MEM_ALLOC_NODE(dmadev, size, virt, node, dev_node) \

From patchwork Thu Jun 7 09:43:22 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40741
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:22 +0200
Message-Id: <20180607094322.14312-27-mk@semihalf.com>
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 27/27] net/ena: set link speed as none

From: Rafal Kozik

The link speed is not limited to 10Gb/s and should not be hardcoded.
It is set to none instead, and applications should not rely on this
value when using the ENA PMD.

Fixes: 1173fca ("ena: add polling-mode driver")

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 5c3b6494f..9ae73e331 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -848,7 +848,7 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	adapter = (struct ena_adapter *)(dev->data->dev_private);
 
 	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_speed = ETH_SPEED_NUM_NONE;
 	link->link_duplex = ETH_LINK_FULL_DUPLEX;
 
 	return 0;