From patchwork Tue Jan 17 07:26:25 2023
X-Patchwork-Submitter: "Xing, Beilei" <beilei.xing@intel.com>
X-Patchwork-Id: 122158
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, qi.z.zhang@intel.com, Beilei Xing <beilei.xing@intel.com>
Subject: [PATCH v3 14/15] common/idpf: add vec queue setup
Date: Tue, 17 Jan 2023 07:26:25 +0000
Message-Id: <20230117072626.93796-19-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20230117072626.93796-1-beilei.xing@intel.com>
References: <20230117072626.93796-1-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

From: Beilei Xing <beilei.xing@intel.com>

Move vector queue setup for single queue model to common module.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 57 ++++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |  2 +
 drivers/common/idpf/version.map        |  1 +
 drivers/net/idpf/idpf_rxtx.c           | 57 --------------------------
 drivers/net/idpf/idpf_rxtx.h           |  1 -
 5 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 459057f20e..bc95fef6bc 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1399,3 +1399,60 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return i;
 }
+
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+	const uint16_t mask = rxq->nb_rx_desc - 1;
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+	.release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+	rxq->ops = &def_singleq_rx_ops_vec;
+	return idpf_singleq_rx_vec_setup_default(rxq);
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 827f791505..74d6081638 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -252,5 +252,7 @@ uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
+__rte_internal
+int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 244c74c209..0f3f4aa758 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -32,6 +32,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_rx_vec_setup;
 	idpf_singleq_xmit_pkts;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 74bf207c05..6155531e69 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -743,63 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-static void __rte_cold
-release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
-{
-	const uint16_t mask = rxq->nb_rx_desc - 1;
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
-		return;
-
-	/* free all mbufs that are valid in the ring */
-	if (rxq->rxrearm_nb == 0) {
-		for (i = 0; i < rxq->nb_rx_desc; i++) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	} else {
-		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	}
-
-	rxq->rxrearm_nb = rxq->nb_rx_desc;
-
-	/* set all entries to NULL */
-	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
-}
-
-static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
-	.release_mbufs = release_rxq_mbufs_vec,
-};
-
-static inline int
-idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
-{
-	uintptr_t p;
-	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
-	mb_def.nb_segs = 1;
-	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
-	mb_def.port = rxq->port_id;
-	rte_mbuf_refcnt_set(&mb_def, 1);
-
-	/* prevent compiler reordering: rearm_data covers previous fields */
-	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
-	return 0;
-}
-
-int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
-{
-	rxq->ops = &def_singleq_rx_ops_vec;
-	return idpf_singleq_rx_vec_setup_default(rxq);
-}
-
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index eab363c3e7..a985dc2cf5 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -44,7 +44,6 @@ void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
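
For reference, with idpf_singleq_rx_vec_setup now exported as an internal
symbol, the net/idpf PMD reaches the vector queue setup through the common
API instead of a private copy. A minimal sketch of such a call site, modeled
on idpf_set_rx_function(); the rx_vec_allowed flag and the
idpf_singleq_recv_pkts_avx512() burst function are assumptions here, not
part of this patch:

    static void
    set_rx_function_sketch(struct rte_eth_dev *dev)
    {
    	struct idpf_vport *vport = dev->data->dev_private;
    	struct idpf_rx_queue *rxq;
    	uint16_t i;

    	/* Assumed capability flag; not defined by this patch. */
    	if (vport->rx_vec_allowed) {
    		/* Per-queue vector setup now lives in common/idpf. */
    		for (i = 0; i < dev->data->nb_rx_queues; i++) {
    			rxq = dev->data->rx_queues[i];
    			(void)idpf_singleq_rx_vec_setup(rxq);
    		}
    		/* Assumed AVX512 burst function; illustrative only. */
    		dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
    		return;
    	}

    	dev->rx_pkt_burst = idpf_singleq_recv_pkts;
    }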
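
The mbuf_initializer built in idpf_singleq_rx_vec_setup_default() is the
usual vector-PMD template: rearm_data is a 64-bit window over an mbuf's
data_off, refcnt, nb_segs and port fields, so resetting a rearmed buffer
collapses four field writes into one 8-byte store. A sketch of how a rearm
path would consume the template; rearm_one() is a hypothetical helper, and
real vector paths do the equivalent store with SIMD intrinsics:

    static inline void
    rearm_one(struct idpf_rx_queue *rxq, struct rte_mbuf *mb)
    {
    	/* One 64-bit store resets data_off, refcnt, nb_segs and port
    	 * to the values captured at queue setup time.
    	 */
    	*(uint64_t *)&mb->rearm_data = rxq->mbuf_initializer;
    }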
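
Note also that release_rxq_mbufs_vec() wraps its ring index with
i = (i + 1) & mask, which is only correct because nb_rx_desc is a power of
two (mask = nb_rx_desc - 1 then has all low bits set). A standalone
illustration of the wrap, with an arbitrary example ring size:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
    	const uint16_t nb_rx_desc = 512;      /* must be a power of two */
    	const uint16_t mask = nb_rx_desc - 1; /* 0x1ff */
    	uint16_t i = 510;
    	int n;

    	for (n = 0; n < 3; n++) {
    		printf("%u\n", i);            /* prints 510, 511, 0 */
    		i = (i + 1) & mask;
    	}
    	return 0;
    }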