From patchwork Tue Jan 16 11:52:42 2018
From: Rafal Kozik
To: dev@dpdk.org
Cc: mw@semihalf.com, mk@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 matua@amazon.com, igorch@amazon.com, Rafal Kozik
Date: Tue, 16 Jan 2018 12:52:42 +0100
Message-Id: <1516103563-9275-2-git-send-email-rk@semihalf.com>
In-Reply-To: <1516103563-9275-1-git-send-email-rk@semihalf.com>
References: <1516103563-9275-1-git-send-email-rk@semihalf.com>
Subject: [dpdk-dev] [PATCH 1/2] net/ena: convert to new Tx offloads API

The ethdev Tx offloads API has changed since:

commit cba7f53b717d ("ethdev: introduce Tx queue offloads API")

This commit adds support for the new Tx offloads API. The queue
configuration is stored in ena_ring.offloads. While preparing mbufs
for Tx, offloads are allowed only if the appropriate flags in this
field are set. (A condensed, standalone sketch of this gating logic
follows the diff below.)
Signed-off-by: Rafal Kozik
Reviewed-by: Shahaf Shuler
---
 drivers/net/ena/ena_ethdev.c | 73 +++++++++++++++++++++++++++++++++++---------
 drivers/net/ena/ena_ethdev.h |  3 ++
 2 files changed, 61 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 22db895..6473776 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -164,6 +164,14 @@ static const struct ena_stats ena_stats_ena_com_strings[] = {
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 #define ENA_STATS_ARRAY_ENA_COM	ARRAY_SIZE(ena_stats_ena_com_strings)
 
+#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
+	DEV_TX_OFFLOAD_UDP_CKSUM |\
+	DEV_TX_OFFLOAD_IPV4_CKSUM |\
+	DEV_TX_OFFLOAD_TCP_TSO)
+#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
+	PKT_TX_IP_CKSUM |\
+	PKT_TX_TCP_SEG)
+
 /** Vendor ID used by Amazon devices */
 #define PCI_VENDOR_ID_AMAZON 0x1D0F
 /** Amazon devices */
@@ -227,6 +235,8 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev,
 			      struct rte_eth_rss_reta_entry64 *reta_conf,
 			      uint16_t reta_size);
 static int ena_get_sset_count(struct rte_eth_dev *dev, int sset);
+static bool ena_are_tx_queue_offloads_allowed(struct ena_adapter *adapter,
+					      uint64_t offloads);
 
 static const struct eth_dev_ops ena_dev_ops = {
 	.dev_configure        = ena_dev_configure,
@@ -280,21 +290,24 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 }
 
 static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
-				       struct ena_com_tx_ctx *ena_tx_ctx)
+				       struct ena_com_tx_ctx *ena_tx_ctx,
+				       uint64_t queue_offloads)
 {
 	struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;
 
-	if (mbuf->ol_flags &
-	    (PKT_TX_L4_MASK | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG)) {
+	if ((mbuf->ol_flags & MBUF_OFFLOADS) &&
+	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
-		if (mbuf->ol_flags & PKT_TX_TCP_SEG) {
+		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
+		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
 		}
 
 		/* check if L3 checksum is needed */
-		if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
+		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -310,19 +323,17 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L4 checksum is needed */
-		switch (mbuf->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		if ((mbuf->ol_flags & PKT_TX_TCP_CKSUM) &&
+		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
-			break;
-		case PKT_TX_UDP_CKSUM:
+		} else if ((mbuf->ol_flags & PKT_TX_UDP_CKSUM) &&
+			   (queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
-			break;
-		default:
+		} else {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UNKNOWN;
 			ena_tx_ctx->l4_csum_enable = false;
-			break;
 		}
 
 		ena_meta->mss = mbuf->tso_segsz;
@@ -945,7 +956,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 			      uint16_t queue_idx,
 			      uint16_t nb_desc,
 			      __rte_unused unsigned int socket_id,
-			      __rte_unused const struct rte_eth_txconf *tx_conf)
+			      const struct rte_eth_txconf *tx_conf)
 {
 	struct ena_com_create_io_ctx ctx =
 		/* policy set to _HOST just to satisfy icc compiler */
@@ -982,6 +993,11 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
+	if (!ena_are_tx_queue_offloads_allowed(adapter, tx_conf->offloads)) {
+		RTE_LOG(ERR, PMD, "Unsupported queue offloads\n");
+		return -EINVAL;
+	}
+
 	ena_qid = ENA_IO_TXQ_IDX(queue_idx);
 
 	ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX;
@@ -1036,6 +1052,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	for (i = 0; i < txq->ring_size; i++)
 		txq->empty_tx_reqs[i] = i;
 
+	txq->offloads = tx_conf->offloads;
+
 	/* Store pointer to this queue in upper layer */
 	txq->configured = 1;
 	dev->data->tx_queues[queue_idx] = txq;
@@ -1386,6 +1404,14 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 {
 	struct ena_adapter *adapter =
 		(struct ena_adapter *)(dev->data->dev_private);
+	uint64_t tx_offloads = dev->data->dev_conf.txmode.offloads;
+
+	if ((tx_offloads & adapter->tx_supported_offloads) != tx_offloads) {
+		RTE_LOG(ERR, PMD, "Some Tx offloads are not supported "
+			"requested 0x%lx supported 0x%lx\n",
+			tx_offloads, adapter->tx_supported_offloads);
+		return -ENOTSUP;
+	}
 
 	if (!(adapter->state == ENA_ADAPTER_STATE_INIT ||
 	      adapter->state == ENA_ADAPTER_STATE_STOPPED)) {
@@ -1407,6 +1433,7 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 		break;
 	}
 
+	adapter->tx_selected_offloads = tx_offloads;
 	return 0;
 }
 
@@ -1435,13 +1462,26 @@ static void ena_init_rings(struct ena_adapter *adapter)
 	}
 }
 
+static bool ena_are_tx_queue_offloads_allowed(struct ena_adapter *adapter,
+					      uint64_t offloads)
+{
+	uint64_t port_offloads = adapter->tx_selected_offloads;
+
+	/* Check if port supports all requested offloads.
+	 * True if all offloads selected for queue are set for port.
+	 */
+	if ((offloads & port_offloads) != offloads)
+		return false;
+	return true;
+}
+
 static void ena_infos_get(struct rte_eth_dev *dev,
 			  struct rte_eth_dev_info *dev_info)
 {
 	struct ena_adapter *adapter;
 	struct ena_com_dev *ena_dev;
 	struct ena_com_dev_get_features_ctx feat;
-	uint32_t rx_feat = 0, tx_feat = 0;
+	uint64_t rx_feat = 0, tx_feat = 0;
 	int rc = 0;
 
 	ena_assert_msg(dev->data != NULL, "Uninitialized device");
@@ -1490,6 +1530,7 @@ static void ena_infos_get(struct rte_eth_dev *dev,
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = rx_feat;
 	dev_info->tx_offload_capa = tx_feat;
+	dev_info->tx_queue_offload_capa = tx_feat;
 
 	dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
 	dev_info->max_rx_pktlen = adapter->max_mtu;
@@ -1498,6 +1539,8 @@ static void ena_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = adapter->num_queues;
 	dev_info->max_tx_queues = adapter->num_queues;
 	dev_info->reta_size = ENA_RX_RSS_TABLE_SIZE;
+
+	adapter->tx_supported_offloads = tx_feat;
 }
 
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
@@ -1714,7 +1757,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		} /* there's no else as we take advantage of memset zeroing */
 
 		/* Set TX offloads flags, if applicable */
-		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx);
+		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads);
 
 		if (unlikely(mbuf->ol_flags &
 			     (PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD)))
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index be8bc9f..3e72777 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -91,6 +91,7 @@ struct ena_ring {
 	uint8_t tx_max_header_size;
 	int configured;
 	struct ena_adapter *adapter;
+	uint64_t offloads;
 } __rte_cache_aligned;
 
 enum ena_adapter_state {
@@ -175,6 +176,8 @@ struct ena_adapter {
 	struct ena_driver_stats *drv_stats;
 	enum ena_adapter_state state;
 
+	uint64_t tx_supported_offloads;
+	uint64_t tx_selected_offloads;
 };
 
 #endif /* _ENA_ETHDEV_H_ */