From patchwork Sun Dec 17 20:20:40 2023
X-Patchwork-Submitter: Chuanyu Xue
X-Patchwork-Id: 135243
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Chuanyu Xue
To: wenzhuo.lu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Chuanyu Xue
Subject: [PATCH] net/e1000: support launchtime feature
Date: Sun, 17 Dec 2023 15:20:40 -0500
Message-Id: <20231217202040.478959-1-chuanyu.xue@uconn.edu>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Enable time-based scheduled Tx of packets based on the
RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP flag. The launch time defines the
packet transmission time based on the PTP clock at the MAC layer, and is
written into the advanced transmit descriptor.
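On the i210, the descriptor's launch time field holds only the offset within
the current second, quantized to 32 ns units, after compensating a fixed
latency (E1000_I210_LT_LATENCY = 0x41F9 ns in this patch). A minimal
self-contained sketch of that conversion (the helper name is made up for
illustration; only the constants and the arithmetic come from the patch):

```c
#include <stdint.h>

#define NSEC_PER_SEC  1000000000ULL
#define LT_LATENCY_NS 0x41F9ULL   /* E1000_I210_LT_LATENCY from the patch */

/* Illustrative helper mirroring the computation in igbe_set_xmit_ctx():
 * subtract the fixed latency, keep the sub-second part of the target
 * transmission time, and quantize it to the 32 ns descriptor units. */
static inline uint32_t launch_time_field(uint64_t txtime_ns)
{
	uint64_t lt = (txtime_ns - LT_LATENCY_NS) % NSEC_PER_SEC;
	return (uint32_t)(lt / 32);
}
```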
Signed-off-by: Chuanyu Xue
---
 drivers/net/e1000/base/e1000_regs.h |  1 +
 drivers/net/e1000/e1000_ethdev.h    |  3 ++
 drivers/net/e1000/igb_ethdev.c      | 28 ++++++++++++++++++
 drivers/net/e1000/igb_rxtx.c        | 44 ++++++++++++++++++++++++-----
 4 files changed, 69 insertions(+), 7 deletions(-)

diff --git a/drivers/net/e1000/base/e1000_regs.h b/drivers/net/e1000/base/e1000_regs.h
index d44de59c29..092d9d71e6 100644
--- a/drivers/net/e1000/base/e1000_regs.h
+++ b/drivers/net/e1000/base/e1000_regs.h
@@ -162,6 +162,7 @@
 /* QAV Tx mode control register */
 #define E1000_I210_TQAVCTRL	0x3570
+#define E1000_I210_LAUNCH_OS0	0x3578
 
 /* QAV Tx mode control register bitfields masks */
 /* QAV enable */
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 718a9746ed..174f7aaf52 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -382,6 +382,9 @@ extern struct igb_rss_filter_list igb_filter_rss_list;
 TAILQ_HEAD(igb_flow_mem_list, igb_flow_mem);
 extern struct igb_flow_mem_list igb_flow_list;
 
+extern uint64_t igb_tx_timestamp_dynflag;
+extern int igb_tx_timestamp_dynfield_offset;
+
 extern const struct rte_flow_ops igb_flow_ops;
 
 /*
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 8858f975f8..4d3d8ae30a 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -223,6 +223,7 @@ static int igb_timesync_read_time(struct rte_eth_dev *dev,
 				  struct timespec *timestamp);
 static int igb_timesync_write_time(struct rte_eth_dev *dev,
 				   const struct timespec *timestamp);
+static int eth_igb_read_clock(__rte_unused struct rte_eth_dev *dev, uint64_t *clock);
 static int eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev,
 					uint16_t queue_id);
 static int eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev,
@@ -313,6 +314,9 @@ static const struct rte_pci_id pci_id_igbvf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+uint64_t igb_tx_timestamp_dynflag;
+int igb_tx_timestamp_dynfield_offset = -1;
+
 static const struct rte_eth_desc_lim rx_desc_lim = {
 	.nb_max = E1000_MAX_RING_DESC,
 	.nb_min = E1000_MIN_RING_DESC,
@@ -389,6 +393,7 @@ static const struct eth_dev_ops eth_igb_ops = {
 	.timesync_adjust_time = igb_timesync_adjust_time,
 	.timesync_read_time   = igb_timesync_read_time,
 	.timesync_write_time  = igb_timesync_write_time,
+	.read_clock           = eth_igb_read_clock,
 };
 
 /*
@@ -1198,6 +1203,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	int ret, mask;
+	uint32_t tqavctrl;
 	uint32_t intr_vector = 0;
 	uint32_t ctrl_ext;
 	uint32_t *speeds;
@@ -1281,6 +1287,15 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (igb_tx_timestamp_dynflag > 0) {
+		tqavctrl = E1000_READ_REG(hw, E1000_I210_TQAVCTRL);
+		tqavctrl |= E1000_TQAVCTRL_MODE;
+		tqavctrl |= E1000_TQAVCTRL_FETCH_ARB; /* Fetch the queue most empty, no Round Robin */
+		tqavctrl |= E1000_TQAVCTRL_LAUNCH_TIMER_ENABLE; /* Enable launch time */
+		E1000_WRITE_REG(hw, E1000_I210_TQAVCTRL, tqavctrl);
+		E1000_WRITE_REG(hw, E1000_I210_LAUNCH_OS0, 1ULL << 31); /* Set launch offset to default */
+	}
+
 	e1000_clear_hw_cntrs_base_generic(hw);
 
 	/*
@@ -4882,6 +4897,19 @@ igb_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+eth_igb_read_clock(__rte_unused struct rte_eth_dev *dev, uint64_t *clock)
+{
+	uint64_t systime_cycles;
+	struct e1000_adapter *adapter = dev->data->dev_private;
+
+	systime_cycles = igb_read_systime_cyclecounter(dev);
+	uint64_t ns = rte_timecounter_update(&adapter->systime_tc, systime_cycles);
+	*clock = ns;
+
+	return 0;
+}
+
 static int
 eth_igb_get_reg_length(struct rte_eth_dev *dev __rte_unused)
 {
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 448c4b7d9d..e5da8e250d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -212,6 +212,9 @@ struct igb_tx_queue {
 #define IGB_TSO_MAX_HDRLEN	(512)
 #define IGB_TSO_MAX_MSS		(9216)
 
+/* Macro to compensate latency in launch time offloading */
+#define E1000_I210_LT_LATENCY	0x41F9
+
 /*********************************************************************
  *
  *  TX function
@@ -244,12 +247,13 @@ check_tso_para(uint64_t ol_req, union igb_tx_offload ol_para)
 static inline void
 igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct e1000_adv_tx_context_desc *ctx_txd,
-		uint64_t ol_flags, union igb_tx_offload tx_offload)
+		uint64_t ol_flags, union igb_tx_offload tx_offload, uint64_t txtime)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
 	uint32_t ctx_idx, ctx_curr;
 	uint32_t vlan_macip_lens;
+	uint32_t launch_time;
 	union igb_tx_offload tx_offload_mask;
 
 	ctx_curr = txq->ctx_curr;
@@ -312,16 +316,25 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		}
 	}
 
-	txq->ctx_cache[ctx_curr].flags = ol_flags;
-	txq->ctx_cache[ctx_curr].tx_offload.data =
-		tx_offload_mask.data & tx_offload.data;
-	txq->ctx_cache[ctx_curr].tx_offload_mask = tx_offload_mask;
+	if (!txtime) {
+		txq->ctx_cache[ctx_curr].flags = ol_flags;
+		txq->ctx_cache[ctx_curr].tx_offload.data =
+			tx_offload_mask.data & tx_offload.data;
+		txq->ctx_cache[ctx_curr].tx_offload_mask = tx_offload_mask;
+	}
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = (uint32_t)tx_offload.data;
 	ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
 	ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
 	ctx_txd->u.seqnum_seed = 0;
+
+	if (txtime) {
+		launch_time = (txtime - E1000_I210_LT_LATENCY) % NSEC_PER_SEC;
+		ctx_txd->u.launch_time = rte_cpu_to_le_32(launch_time / 32);
+	} else {
+		ctx_txd->u.launch_time = 0;
+	}
 }
 
 /*
@@ -400,6 +413,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t new_ctx = 0;
 	uint32_t ctx = 0;
 	union igb_tx_offload tx_offload = {0};
+	uint64_t ts;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -552,7 +566,12 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 					txe->mbuf = NULL;
 				}
 
-				igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req, tx_offload);
+				if (igb_tx_timestamp_dynflag > 0) {
+					ts = *RTE_MBUF_DYNFIELD(tx_pkt, igb_tx_timestamp_dynfield_offset, uint64_t *);
+					igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req, tx_offload, ts);
+				} else {
+					igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req, tx_offload, 0);
+				}
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -1464,7 +1483,8 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
 			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
 			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
-			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 
 	return tx_offload_capa;
 }
@@ -2579,9 +2599,11 @@ eth_igb_tx_init(struct rte_eth_dev *dev)
 {
 	struct e1000_hw     *hw;
 	struct igb_tx_queue *txq;
+	uint64_t offloads = dev->data->dev_conf.txmode.offloads;
 	uint32_t tctl;
 	uint32_t txdctl;
 	uint16_t i;
+	int err;
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -2612,6 +2634,14 @@ eth_igb_tx_init(struct rte_eth_dev *dev)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 	}
 
+	if (offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) {
+		err = rte_mbuf_dyn_tx_timestamp_register(
+			&igb_tx_timestamp_dynfield_offset,
+			&igb_tx_timestamp_dynflag);
+		if (err)
+			PMD_DRV_LOG(ERR, "Failed to register tx timestamp dynamic field");
+	}
+
 	/* Program the Transmit Control Register. */
 	tctl = E1000_READ_REG(hw, E1000_TCTL);
 	tctl &= ~E1000_TCTL_CT;
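A note on eth_igb_read_clock() above: it converts the free-running SYSTIM
cycle counter into nanoseconds through rte_timecounter_update(), which
accumulates masked counter deltas so that wraparound between two reads is
handled correctly. A self-contained sketch of that accumulation idea (the
struct and function names here are invented for illustration and are not the
DPDK API; it also assumes one counter tick equals one nanosecond):

```c
#include <stdint.h>

/* Toy timecounter: tracks a free-running hardware counter of limited
 * width and accumulates elapsed nanoseconds across reads. */
struct toy_timecounter {
	uint64_t cycle_last;  /* counter value at the previous read */
	uint64_t nsec;        /* accumulated nanoseconds */
	uint64_t mask;        /* counter width mask, e.g. (1ULL << 32) - 1 */
};

static uint64_t toy_timecounter_update(struct toy_timecounter *tc,
				       uint64_t cycle_now)
{
	/* Masked subtraction yields the correct delta even if the
	 * counter wrapped once since the last read. */
	uint64_t delta = (cycle_now - tc->cycle_last) & tc->mask;

	tc->cycle_last = cycle_now;
	tc->nsec += delta;  /* assuming 1 tick == 1 ns */
	return tc->nsec;
}
```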