From patchwork Tue Jun 6 14:47:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128207 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DB51842C40; Tue, 6 Jun 2023 16:54:53 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 14A23410F2; Tue, 6 Jun 2023 16:54:51 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id E644640223 for ; Tue, 6 Jun 2023 16:54:48 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063289; x=1717599289; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=HvZczovHqMftC1gXvyXISO6NzB2XXSQLZF/bT6MaCCw=; b=MkCzqlczw0mQY53q3yjp/bzAuhcN/yl//iQGHJ3pDsHugL+W2B3l5GAi zOt5Kcqo3UMxyGa9Nk+UwSFS8pw2nKPh+UgJVknYSwGqKYwrSlyU7H9t5 Dw/lqSE7CsGOvcZ760vAXhImRBEs6+HKIcAYXQ8hBfF2MwF6Y95MVH3iA FWl5Ksffoy5VCJr8dDG1FhH+IEdXgBJLnUcWali83rcoRuxfUm6Y7QLjr MbucgYAHNqKrRk0tfvshDMsfyBYkCdymYMhUpCAtxer2tAZXa/P41Lwi6 quGW3gKec9Ze3IQXIU0ntKBZRjxn3aaFarSXhUStQxWrmS4JkhcffwqSu Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356704905" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356704905" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:54:48 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222271" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222271" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:45 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 01/17] graph: rename rte_graph_work as common Date: Tue, 6 Jun 2023 22:47:30 +0800 Message-Id: <20230606144746.708388-2-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Rename rte_graph_work.h to rte_graph_work_common.h for supporting multiple graph worker model. 
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- MAINTAINERS | 3 ++- lib/graph/graph_pcap.c | 2 +- lib/graph/graph_private.h | 2 +- lib/graph/meson.build | 2 +- lib/graph/{rte_graph_worker.h => rte_graph_worker_common.h} | 6 +++--- 5 files changed, 8 insertions(+), 7 deletions(-) rename lib/graph/{rte_graph_worker.h => rte_graph_worker_common.h} (99%) diff --git a/MAINTAINERS b/MAINTAINERS index 48830ae571..34ac499c14 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1716,10 +1716,11 @@ F: doc/guides/prog_guide/bpf_lib.rst Graph - EXPERIMENTAL M: Jerin Jacob M: Kiran Kumar K +M: Nithin Dabilpuram +M: Zhirun Yan F: lib/graph/ F: doc/guides/prog_guide/graph_lib.rst F: app/test/test_graph* -M: Nithin Dabilpuram F: examples/l3fwd-graph/ F: doc/guides/sample_app_ug/l3_forward_graph.rst diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c index 6c43330029..8a220370fa 100644 --- a/lib/graph/graph_pcap.c +++ b/lib/graph/graph_pcap.c @@ -10,7 +10,7 @@ #include #include -#include "rte_graph_worker.h" +#include "rte_graph_worker_common.h" #include "graph_pcap_private.h" diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index eacdef45f0..307e5f70bc 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -13,7 +13,7 @@ #include #include "rte_graph.h" -#include "rte_graph_worker.h" +#include "rte_graph_worker_common.h" extern int rte_graph_logtype; diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 3526d1b5d4..4e2b612ad3 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,6 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', ) -headers = files('rte_graph.h', 'rte_graph_worker.h') +headers = files('rte_graph.h', 'rte_graph_worker_common.h') deps += ['eal', 'pcapng'] diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker_common.h similarity index 99% rename from lib/graph/rte_graph_worker.h rename to lib/graph/rte_graph_worker_common.h index 438595b15c..0bad2938f3 100644 --- a/lib/graph/rte_graph_worker.h +++ b/lib/graph/rte_graph_worker_common.h @@ -2,8 +2,8 @@ * Copyright(C) 2020 Marvell International Ltd. 
*/ -#ifndef _RTE_GRAPH_WORKER_H_ -#define _RTE_GRAPH_WORKER_H_ +#ifndef _RTE_GRAPH_WORKER_COMMON_H_ +#define _RTE_GRAPH_WORKER_COMMON_H_ /** * @file rte_graph_worker.h @@ -518,4 +518,4 @@ rte_node_next_stream_move(struct rte_graph *graph, struct rte_node *src, } #endif -#endif /* _RTE_GRAPH_WORKER_H_ */ +#endif /* _RTE_GRAPH_WORKER_COMMON_H_ */ From patchwork Tue Jun 6 14:47:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128208 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 31D9B42C40; Tue, 6 Jun 2023 16:54:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 45D6742B8E; Tue, 6 Jun 2023 16:54:53 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id B449B411F3 for ; Tue, 6 Jun 2023 16:54:51 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063291; x=1717599291; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nCdy8MQJAPQROCK1tFl/63aQavqnGyCsD52wCoDrE6Q=; b=JsENwvyHJdXPtdIbf12IEomOBr5BTXvKrJOpdF1+ZoEvBiCCWDdJYeri XtR1lyIrOi8dReiO3jXo87gGuEWzkQcl2znIqhaySxwKEqRlR6upnItth 2xazl4O4yM4F9zjbtbnMIxKCkBRbJ3NopMgzk+YagcfzN9KgEp/PaZu/F SZkcPEHrDZAbhIX3UZ4+Nn5ZL1Un9uys07bXn3hOKhQS2ojWur0jc+Xb3 h0FSZMLWm+ZkNVzOwQTIOOHzE9dgN0v70srIsY5cPeqWBLDkYj9wltvLF wNcCmGQIHFuYIpmuv+Oi0PsfDCqejKdGGRo8tFRRuP8fpHBqmBmSWBJa/ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356704955" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356704955" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:54:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222316" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222316" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:48 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 02/17] graph: split graph worker into common and default model Date: Tue, 6 Jun 2023 22:47:31 +0800 Message-Id: <20230606144746.708388-3-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org To support multiple graph worker models, split the graph worker into common and default parts. The current walk implementation is moved into rte_graph_model_rtc.h as rte_graph_walk_rtc(), because the default model is RTC (Run-to-completion).
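For context, the application-facing entry point stays rte_graph_walk() in rte_graph_worker.h; only its body moves behind rte_graph_walk_rtc(). A minimal worker-loop sketch, assuming a graph obtained via rte_graph_lookup() and an application-defined quit flag (both illustrative, not part of this patch; the function shape matches what rte_eal_remote_launch() expects):

#include <stdbool.h>
#include <rte_graph_worker.h>

static volatile bool app_quit; /* hypothetical application stop flag */

/* Typical polling loop: unchanged by the common/default split. */
static int
app_graph_loop(void *arg)
{
        struct rte_graph *graph = arg; /* e.g. returned by rte_graph_lookup() */

        while (!app_quit)
                rte_graph_walk(graph); /* currently resolves to rte_graph_walk_rtc() */

        return 0;
}

Keeping rte_graph_walk() as a thin wrapper is what allows the later patches in this series to add the dispatch model without changing existing applications.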
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_pcap.c | 2 +- lib/graph/graph_private.h | 2 +- lib/graph/meson.build | 2 +- lib/graph/rte_graph_model_rtc.h | 62 +++++++++++++++++++++++++++++ lib/graph/rte_graph_worker.h | 35 ++++++++++++++++ lib/graph/rte_graph_worker_common.h | 57 -------------------------- 6 files changed, 100 insertions(+), 60 deletions(-) create mode 100644 lib/graph/rte_graph_model_rtc.h create mode 100644 lib/graph/rte_graph_worker.h diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c index 8a220370fa..6c43330029 100644 --- a/lib/graph/graph_pcap.c +++ b/lib/graph/graph_pcap.c @@ -10,7 +10,7 @@ #include #include -#include "rte_graph_worker_common.h" +#include "rte_graph_worker.h" #include "graph_pcap_private.h" diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 307e5f70bc..eacdef45f0 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -13,7 +13,7 @@ #include #include "rte_graph.h" -#include "rte_graph_worker_common.h" +#include "rte_graph_worker.h" extern int rte_graph_logtype; diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 4e2b612ad3..3526d1b5d4 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,6 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', ) -headers = files('rte_graph.h', 'rte_graph_worker_common.h') +headers = files('rte_graph.h', 'rte_graph_worker.h') deps += ['eal', 'pcapng'] diff --git a/lib/graph/rte_graph_model_rtc.h b/lib/graph/rte_graph_model_rtc.h new file mode 100644 index 0000000000..10b359772f --- /dev/null +++ b/lib/graph/rte_graph_model_rtc.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. + * Copyright(C) 2023 Intel Corporation + */ + +#include "rte_graph_worker_common.h" + +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +static inline void +rte_graph_walk_rtc(struct rte_graph *graph) +{ + const rte_graph_off_t *cir_start = graph->cir_start; + const rte_node_t mask = graph->cir_mask; + uint32_t head = graph->head; + struct rte_node *node; + uint64_t start; + uint16_t rc; + void **objs; + + /* + * Walk on the source node(s) ((cir_start - head) -> cir_start) and then + * on the pending streams (cir_start -> (cir_start + mask) -> cir_start) + * in a circular buffer fashion. + * + * +-----+ <= cir_start - head [number of source nodes] + * | | + * | ... | <= source nodes + * | | + * +-----+ <= cir_start [head = 0] [tail = 0] + * | | + * | ... | <= pending streams + * | | + * +-----+ <= cir_start + mask + */ + while (likely(head != graph->tail)) { + node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + objs = node->objs; + rte_prefetch0(objs); + + if (rte_graph_has_stats_feature()) { + start = rte_rdtsc(); + rc = node->process(graph, node, objs, node->idx); + node->total_cycles += rte_rdtsc() - start; + node->total_calls++; + node->total_objs += rc; + } else { + node->process(graph, node, objs, node->idx); + } + node->idx = 0; + head = likely((int32_t)head > 0) ? 
head & mask : head; + } + graph->tail = 0; +} diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h new file mode 100644 index 0000000000..5b58f7bda9 --- /dev/null +++ b/lib/graph/rte_graph_worker.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. + * Copyright(C) 2023 Intel Corporation + */ + +#ifndef _RTE_GRAPH_WORKER_H_ +#define _RTE_GRAPH_WORKER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "rte_graph_model_rtc.h" + +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +__rte_experimental +static inline void +rte_graph_walk(struct rte_graph *graph) +{ + rte_graph_walk_rtc(graph); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_GRAPH_WORKER_H_ */ diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 0bad2938f3..b58f8f6947 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -128,63 +128,6 @@ __rte_experimental void __rte_node_stream_alloc_size(struct rte_graph *graph, struct rte_node *node, uint16_t req_size); -/** - * Perform graph walk on the circular buffer and invoke the process function - * of the nodes and collect the stats. - * - * @param graph - * Graph pointer returned from rte_graph_lookup function. - * - * @see rte_graph_lookup() - */ -__rte_experimental -static inline void -rte_graph_walk(struct rte_graph *graph) -{ - const rte_graph_off_t *cir_start = graph->cir_start; - const rte_node_t mask = graph->cir_mask; - uint32_t head = graph->head; - struct rte_node *node; - uint64_t start; - uint16_t rc; - void **objs; - - /* - * Walk on the source node(s) ((cir_start - head) -> cir_start) and then - * on the pending streams (cir_start -> (cir_start + mask) -> cir_start) - * in a circular buffer fashion. - * - * +-----+ <= cir_start - head [number of source nodes] - * | | - * | ... | <= source nodes - * | | - * +-----+ <= cir_start [head = 0] [tail = 0] - * | | - * | ... | <= pending streams - * | | - * +-----+ <= cir_start + mask - */ - while (likely(head != graph->tail)) { - node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); - RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); - objs = node->objs; - rte_prefetch0(objs); - - if (rte_graph_has_stats_feature()) { - start = rte_rdtsc(); - rc = node->process(graph, node, objs, node->idx); - node->total_cycles += rte_rdtsc() - start; - node->total_calls++; - node->total_objs += rc; - } else { - node->process(graph, node, objs, node->idx); - } - node->idx = 0; - head = likely((int32_t)head > 0) ? 
head & mask : head; - } - graph->tail = 0; -} - /* Fast path helper functions */ /** From patchwork Tue Jun 6 14:47:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128209 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AA6E642C40; Tue, 6 Jun 2023 16:55:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7E05C42D10; Tue, 6 Jun 2023 16:54:57 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 452C542D0B for ; Tue, 6 Jun 2023 16:54:55 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063295; x=1717599295; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ru9P/3NtOG0gaqfzAm2TE0VpS0/70g/oVIIAmtVP1ao=; b=B23RJsXDZvuLuRzelavnX6m63I+mY0Nmg3aCIdS0QoVj4bvNL6updiOh JWOtiOzD/gQk6vVe6FFJ/B/+vrmbCvxjS3qR+PduQnKKIzElF+c4hf6wQ jt5FP7ATQhW5I+mcyi2LLW7MS+XELCKCTq/ima3cP9fUn6H1BL2K6NF7c gTcvDuju61oV24nuGHi0ZLZu5a11Drb9GRl2I7xeO1UlK61hCOD3003Qy C1Gmext4BH2TRzF6UTPA8J71N0jkL9Wyp0v2LnZXx/ewMyqPE3MLdJ9mL HFwsmVJP0FSGmKCcs/ztjyAvdEefPE2J/fhyaFCYoyKSx6qijENlRA531 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705022" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705022" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:54:53 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222367" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222367" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:51 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 03/17] graph: move node process into inline function Date: Tue, 6 Jun 2023 22:47:32 +0800 Message-Id: <20230606144746.708388-4-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Node process is a single and reusable block, move the code into an inline function. 
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_model_rtc.h | 20 ++--------------- lib/graph/rte_graph_worker_common.h | 33 +++++++++++++++++++++++++++++ 2 files changed, 35 insertions(+), 18 deletions(-) diff --git a/lib/graph/rte_graph_model_rtc.h b/lib/graph/rte_graph_model_rtc.h index 10b359772f..4b6236e301 100644 --- a/lib/graph/rte_graph_model_rtc.h +++ b/lib/graph/rte_graph_model_rtc.h @@ -21,9 +21,6 @@ rte_graph_walk_rtc(struct rte_graph *graph) const rte_node_t mask = graph->cir_mask; uint32_t head = graph->head; struct rte_node *node; - uint64_t start; - uint16_t rc; - void **objs; /* * Walk on the source node(s) ((cir_start - head) -> cir_start) and then @@ -42,21 +39,8 @@ rte_graph_walk_rtc(struct rte_graph *graph) */ while (likely(head != graph->tail)) { node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); - RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); - objs = node->objs; - rte_prefetch0(objs); - - if (rte_graph_has_stats_feature()) { - start = rte_rdtsc(); - rc = node->process(graph, node, objs, node->idx); - node->total_cycles += rte_rdtsc() - start; - node->total_calls++; - node->total_objs += rc; - } else { - node->process(graph, node, objs, node->idx); - } - node->idx = 0; - head = likely((int32_t)head > 0) ? head & mask : head; + __rte_node_process(graph, node); + head = likely((int32_t)head > 0) ? head & mask : head; } graph->tail = 0; } diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index b58f8f6947..41428974db 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -130,6 +130,39 @@ void __rte_node_stream_alloc_size(struct rte_graph *graph, /* Fast path helper functions */ +/** + * @internal + * + * Enqueue a given node to the tail of the graph reel. + * + * @param graph + * Pointer Graph object. + * @param node + * Pointer to node object to be enqueued. 
+ */ +static __rte_always_inline void +__rte_node_process(struct rte_graph *graph, struct rte_node *node) +{ + uint64_t start; + uint16_t rc; + void **objs; + + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + objs = node->objs; + rte_prefetch0(objs); + + if (rte_graph_has_stats_feature()) { + start = rte_rdtsc(); + rc = node->process(graph, node, objs, node->idx); + node->total_cycles += rte_rdtsc() - start; + node->total_calls++; + node->total_objs += rc; + } else { + node->process(graph, node, objs, node->idx); + } + node->idx = 0; +} + /** * @internal * From patchwork Tue Jun 6 14:47:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128210 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A1A8E42C40; Tue, 6 Jun 2023 16:55:10 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B346B42D2C; Tue, 6 Jun 2023 16:55:02 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id E10D442D20 for ; Tue, 6 Jun 2023 16:54:59 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063300; x=1717599300; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=RP7fhQAtDrfeSkqotzaE+h/ZpaOREV75sEAYMlRmT2k=; b=FdLv7JmrI5w4XGkI3/CkLkzJ0BkygLlxGz8OTWHEYmz1y3rbCWethHGr R76h9d0QEoCbYR1tH+hv0abierCkjrYWsF7+aLPTn8Z4RsR2bAy/klGiF sgP7d/LsZ6Ora2XWyigTGXfhir9vy7fXfy0MC+UEBJ+8A+mqhCL3n2MK2 6BBhhfOe9QGiOxwjqmU/6sxHJZJiih5er8wL0kCYFf7Ts9Rxi4pC05Jnb IgBe0aSC/xvBimgOy/Em2sZpnX0SjqAzl/jlX+bgURidsJU2KMumivk7G PZePebmr5XEE+QCkmaKOSGmVQlUdjSivpc0OZKN9f6fHzQbL7P67ZcFtk w==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705091" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705091" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:54:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222437" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222437" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:53 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 04/17] graph: add get/set graph worker model APIs Date: Tue, 6 Jun 2023 22:47:33 +0800 Message-Id: <20230606144746.708388-5-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add new get/set APIs to configure graph worker model which is used to determine which model will be chosen. 
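A minimal configuration sketch, assuming the model is chosen once after the graphs are created and before any lcore starts walking them (error handling trimmed; only the constants and functions introduced by this patch are used):

#include <stdbool.h>
#include <stdint.h>
#include <rte_graph_worker.h>

static int
app_configure_model(bool use_dispatch)
{
        uint32_t model = use_dispatch ? RTE_GRAPH_MODEL_MCORE_DISPATCH
                                      : RTE_GRAPH_MODEL_RTC;

        /* Applies to all graphs; must be called before graph walk starts. */
        if (rte_graph_worker_model_set(model) != 0)
                return -1;

        return 0;
}

On the fast path, rte_graph_worker_model_get(graph) can then be used to branch on the model a given graph is running.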
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/meson.build | 1 + lib/graph/rte_graph_worker.c | 21 ++++++++++ lib/graph/rte_graph_worker_common.h | 61 +++++++++++++++++++++++++++++ lib/graph/version.map | 3 ++ 4 files changed, 86 insertions(+) create mode 100644 lib/graph/rte_graph_worker.c diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 3526d1b5d4..9fab8243da 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -15,6 +15,7 @@ sources = files( 'graph_stats.c', 'graph_populate.c', 'graph_pcap.c', + 'rte_graph_worker.c', ) headers = files('rte_graph.h', 'rte_graph_worker.h') diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c new file mode 100644 index 0000000000..3a4215f1a2 --- /dev/null +++ b/lib/graph/rte_graph_worker.c @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#include "rte_graph_worker_common.h" +#include "graph_private.h" + +int +rte_graph_worker_model_set(uint32_t model) +{ + struct graph_head *graph_head = graph_list_head_get(); + struct graph *graph; + + if (graph_model_is_valid(model)) + return -EINVAL; + + STAILQ_FOREACH(graph, graph_head, next) + graph->graph->model = model; + + return 0; +} diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 41428974db..5dba3c0edd 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -29,6 +29,13 @@ extern "C" { #endif +/** Graph worker models */ +/* If adding new entry, then update graph_model_is_valid API. */ +#define RTE_GRAPH_MODEL_RTC 0 /**< Run-To-Completion model. It is the default model. */ +#define RTE_GRAPH_MODEL_DEFAULT RTE_GRAPH_MODEL_RTC /**< Default graph model. */ +#define RTE_GRAPH_MODEL_MCORE_DISPATCH 1 +/**< Dispatch model to support cross-core dispatching within core affinity. */ + /** * @internal * @@ -41,6 +48,7 @@ struct rte_graph { rte_node_t nb_nodes; /**< Number of nodes in the graph. */ rte_graph_off_t *cir_start; /**< Pointer to circular buffer. */ rte_graph_off_t nodes_start; /**< Offset at which node memory starts. */ + uint32_t model; /**< graph model */ rte_graph_t id; /**< Graph identifier. */ int socket; /**< Socket ID where memory is allocated. */ char name[RTE_GRAPH_NAMESIZE]; /**< Name of the graph. */ @@ -490,6 +498,59 @@ rte_node_next_stream_move(struct rte_graph *graph, struct rte_node *src, } } +/** + * Test the validity of model. + * + * @param id + * Node id to check. + * + * @return + * true if graph model is valid, false otherwise. + */ +static __rte_always_inline +bool +graph_model_is_valid(uint32_t model) +{ + if (model > RTE_GRAPH_MODEL_MCORE_DISPATCH) + return false; + + return true; +} + +/** + * @note This function does not perform any locking, and is only safe to call + * before graph running. It will set all graphs the same model. + * + * @param model + * Name of the graph worker model. + * + * @return + * 0 on success, -1 otherwise. + */ +__rte_experimental +int rte_graph_worker_model_set(uint32_t model); + +/** + * Get the graph worker model + * + * @note All graph will use the same model and this function will get model from the first one + * + * @param graph + * Graph pointer. + * + * @return + * Graph worker model on success. 
+ */ +__rte_experimental +static inline uint32_t +rte_graph_worker_model_get(struct rte_graph *graph) +{ + if (!graph_model_is_valid(graph->model)) + return -EINVAL; + + return graph->model; +} + #ifdef __cplusplus } #endif diff --git a/lib/graph/version.map b/lib/graph/version.map index 13b838752d..eea73ec9ca 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -43,5 +43,8 @@ EXPERIMENTAL { rte_node_next_stream_put; rte_node_next_stream_move; + rte_graph_worker_model_set; + rte_graph_worker_model_get; + local: *; }; From patchwork Tue Jun 6 14:47:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128211 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 11BBC42C40; Tue, 6 Jun 2023 16:55:19 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2BCF242D3E; Tue, 6 Jun 2023 16:55:05 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 7E11442D32 for ; Tue, 6 Jun 2023 16:55:03 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063303; x=1717599303; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=pleS+wrAZG1mCR5PoBXu+SX/mS0dmroqc6Z7TKbxibk=; b=gRNoV9LtCo4bQfU1jGbnC0SOh+aZ8FVJhQOPvNXIRz0NSczrnrSKdQR4 NnLszRlBNqnIefAhnrEjoGaUioApvLQqv6tRWyIagYIiJqk5B4n4DI9tx nsItjUkdhP1rdl1mqQD+S4SW2IDl8B7vN+ci0bwL7m9BhF0IXAX3oGYIa ATckLDWfnMeXhUnFhNRN0NYE2sYjAPrbwk11TVUxQFgjSbMCrirIV6tXU kqf96OEpImcVe1F7/e5a7wgHAztGOHmU0O8b2X79A5pBR4eYDH5bpbQda PGN3AXIxVCgRIXTw17Nbtor4Pm62hRcbOZOKJP9H/eTwCbOvuj7MnZDdx g==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705145" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705145" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:54:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222498" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222498" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:56 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 05/17] graph: introduce graph node core affinity API Date: Tue, 6 Jun 2023 22:47:34 +0800 Message-Id: <20230606144746.708388-6-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add lcore_id for node to hold affinity core id and impl rte_graph_model_mcore_dispatch_lcore_affinity_set to set node affinity with specific lcore. 
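A short sketch of pinning individual nodes to lcores for the dispatch model. The node names and lcore ids are examples only, and the include assumes the new header is exported, which later patches in this series take care of:

#include <rte_graph_model_mcore_dispatch.h>

static int
app_pin_nodes(void)
{
        /* Example: run the Rx node on lcore 1 and the Tx node on lcore 2. */
        if (rte_graph_model_mcore_dispatch_node_lcore_affinity_set("ethdev_rx-0-0", 1) != 0)
                return -1;

        if (rte_graph_model_mcore_dispatch_node_lcore_affinity_set("ethdev_tx-0", 2) != 0)
                return -1;

        return 0;
}

Nodes left at the default value (RTE_MAX_LCORE) simply have no affinity set.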
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_private.h | 2 + lib/graph/meson.build | 1 + lib/graph/node.c | 1 + lib/graph/rte_graph_model_mcore_dispatch.c | 30 +++++++++++++++ lib/graph/rte_graph_model_mcore_dispatch.h | 45 ++++++++++++++++++++++ lib/graph/version.map | 2 + 6 files changed, 81 insertions(+) create mode 100644 lib/graph/rte_graph_model_mcore_dispatch.c create mode 100644 lib/graph/rte_graph_model_mcore_dispatch.h diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index eacdef45f0..ea4409448d 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -51,6 +51,8 @@ struct node { STAILQ_ENTRY(node) next; /**< Next node in the list. */ char name[RTE_NODE_NAMESIZE]; /**< Name of the node. */ uint64_t flags; /**< Node configuration flag. */ + unsigned int lcore_id; + /**< Node runs on the Lcore ID used for mcore dispatch model. */ rte_node_process_t process; /**< Node process function. */ rte_node_init_t init; /**< Node init function. */ rte_node_fini_t fini; /**< Node fini function. */ diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 9fab8243da..0685cf9e72 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,7 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', 'rte_graph_worker.c', + 'rte_graph_model_mcore_dispatch.c', ) headers = files('rte_graph.h', 'rte_graph_worker.h') diff --git a/lib/graph/node.c b/lib/graph/node.c index 149414dcd9..339b4a0da5 100644 --- a/lib/graph/node.c +++ b/lib/graph/node.c @@ -100,6 +100,7 @@ __rte_node_register(const struct rte_node_register *reg) goto free; } + node->lcore_id = RTE_MAX_LCORE; node->id = node_id++; /* Add the node at tail */ diff --git a/lib/graph/rte_graph_model_mcore_dispatch.c b/lib/graph/rte_graph_model_mcore_dispatch.c new file mode 100644 index 0000000000..9df2479a10 --- /dev/null +++ b/lib/graph/rte_graph_model_mcore_dispatch.c @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#include "graph_private.h" +#include "rte_graph_model_mcore_dispatch.h" + +int +rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, unsigned int lcore_id) +{ + struct node *node; + int ret = -EINVAL; + + if (lcore_id >= RTE_MAX_LCORE) + return ret; + + graph_spinlock_lock(); + + STAILQ_FOREACH(node, node_list_head_get(), next) { + if (strncmp(node->name, name, RTE_NODE_NAMESIZE) == 0) { + node->lcore_id = lcore_id; + ret = 0; + break; + } + } + + graph_spinlock_unlock(); + + return ret; +} diff --git a/lib/graph/rte_graph_model_mcore_dispatch.h b/lib/graph/rte_graph_model_mcore_dispatch.h new file mode 100644 index 0000000000..7da0483d13 --- /dev/null +++ b/lib/graph/rte_graph_model_mcore_dispatch.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#ifndef _RTE_GRAPH_MODEL_MCORE_DISPATCH_H_ +#define _RTE_GRAPH_MODEL_MCORE_DISPATCH_H_ + +/** + * @file rte_graph_model_mcore_dispatch.h + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * These APIs allow to set core affinity with the node and only used for mcore + * dispatch model. + */ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "rte_graph_worker_common.h" + +/** + * Set lcore affinity with the node used for mcore dispatch model. + * + * @param name + * Valid node name. In the case of the cloned node, the name will be + * "parent node name" + "-" + name. 
+ * @param lcore_id + * The lcore ID value. + * + * @return + * 0 on success, error otherwise. + */ +__rte_experimental +int rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, + unsigned int lcore_id); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_GRAPH_MODEL_MCORE_DISPATCH_H_ */ diff --git a/lib/graph/version.map b/lib/graph/version.map index eea73ec9ca..f39a65e902 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -46,5 +46,7 @@ EXPERIMENTAL { rte_graph_worker_model_set; rte_graph_worker_model_get; + rte_graph_model_mcore_dispatch_node_lcore_affinity_set; + local: *; }; From patchwork Tue Jun 6 14:47:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128212 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EBB8142C40; Tue, 6 Jun 2023 16:55:24 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4CC6B42D32; Tue, 6 Jun 2023 16:55:12 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 2605442D32 for ; Tue, 6 Jun 2023 16:55:10 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063311; x=1717599311; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SV3ha7I9nN4px+aKIzkbIxh/sFhH35idWd+xRafWeTA=; b=cBlQMN37rKaQtl/qSioYeUejj3LEXdORkHq+b9k2AwMKItPXAAwUE/6I 3e0aQpPFANEaYBOj1bqXRv2T7/dd1Rgg0B+Y339AeUT67+APMw2EiBacm gX+PmJoSa0mfe33Vb8Fi5jxuaR623PsFnNBLvWqZ3fxCDkO23AjalyCQv /bdLAUvf1IGTD5z3JIp5TQdOGEwUV6hHi+o+Ojlr4v3X/MLem8k7v5pus 30HolUxkuuco/JUBffJD+0nWqDnPi7UThlXrDsTAhFYSfYxWt1wPqnIUz e6slVcJiTHRJzfKCnHjqvkpEIjAC3lwgs5WKVZyg+8k6KHOhNI2MmZufu Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705207" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705207" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:02 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222558" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222558" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:54:59 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 06/17] graph: introduce graph bind unbind API Date: Tue, 6 Jun 2023 22:47:35 +0800 Message-Id: <20230606144746.708388-7-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add lcore_id for graph to hold affinity core id where graph would run on. 
Add bind/unbind API to set/unset graph affinity attribute. lcore_id will be set as MAX by default, it means not enable this attribute. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 58 +++++++++++++++++++++++++++++++++++++++ lib/graph/graph_private.h | 2 ++ lib/graph/rte_graph.h | 22 +++++++++++++++ lib/graph/version.map | 2 ++ 4 files changed, 84 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index 5582631b53..f8243fa61a 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -260,6 +260,63 @@ graph_mem_fixup_secondary(struct rte_graph *graph) return graph_mem_fixup_node_ctx(graph); } +static bool +graph_src_node_avail(struct graph *graph) +{ + struct graph_node *graph_node; + + STAILQ_FOREACH(graph_node, &graph->node_list, next) + if ((graph_node->node->flags & RTE_NODE_SOURCE_F) && + (graph_node->node->lcore_id == RTE_MAX_LCORE || + graph->lcore_id == graph_node->node->lcore_id)) + return true; + + return false; +} + +int +rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + if (!rte_lcore_is_enabled(lcore)) + SET_ERR_JMP(ENOLINK, fail, "lcore %d not enabled", lcore); + + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + break; + + RTE_ASSERT(graph->graph->model == RTE_GRAPH_MODEL_MCORE_DISPATCH); + graph->lcore_id = lcore; + graph->socket = rte_lcore_to_socket_id(lcore); + + /* check the availability of source node */ + if (!graph_src_node_avail(graph)) + graph->graph->head = 0; + + return 0; + +fail: + return -rte_errno; +} + +void +rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + break; + + graph->lcore_id = RTE_MAX_LCORE; + +fail: + return; +} + struct rte_graph * rte_graph_lookup(const char *name) { @@ -346,6 +403,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm) graph->src_node_count = src_node_count; graph->node_count = graph_nodes_count(graph); graph->id = graph_id; + graph->lcore_id = RTE_MAX_LCORE; graph->num_pkt_to_capture = prm->num_pkt_to_capture; if (prm->pcap_filename) rte_strscpy(graph->pcap_filename, prm->pcap_filename, RTE_GRAPH_PCAP_FILE_SZ); diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index ea4409448d..6d2137c81b 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -100,6 +100,8 @@ struct graph { /**< Circular buffer mask for wrap around. */ rte_graph_t id; /**< Graph identifier. */ + unsigned int lcore_id; + /**< Lcore identifier where the graph prefer to run on. Used for mcore dispatch model. */ size_t mem_sz; /**< Memory size of the graph. */ int socket; diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index c9a77297fc..f70c694e77 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -285,6 +285,28 @@ char *rte_graph_id_to_name(rte_graph_t id); __rte_experimental int rte_graph_export(const char *name, FILE *f); +/** + * Bind graph with specific lcore for mcore dispatch model. + * + * @param id + * Graph id to get the pointer of graph object + * @param lcore + * The lcore where the graph will run on + * @return + * 0 on success, error otherwise. 
+ */ +__rte_experimental +int rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore); + +/** + * Unbind graph with lcore for mcore dispatch model + * + * @param id + * Graph id to get the pointer of graph object + */ +__rte_experimental +void rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id); + /** * Get graph object from its name. * diff --git a/lib/graph/version.map b/lib/graph/version.map index f39a65e902..132e666b79 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -18,6 +18,8 @@ EXPERIMENTAL { rte_graph_node_get_by_name; rte_graph_obj_dump; rte_graph_walk; + rte_graph_model_mcore_dispatch_core_bind; + rte_graph_model_mcore_dispatch_core_unbind; rte_graph_cluster_stats_create; rte_graph_cluster_stats_destroy; From patchwork Tue Jun 6 14:47:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128213 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D267542C40; Tue, 6 Jun 2023 16:55:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 654B642D43; Tue, 6 Jun 2023 16:55:14 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id D91B742D3A for ; Tue, 6 Jun 2023 16:55:12 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063313; x=1717599313; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WCFvBmK+rTDHcSHbPyRP1RLErZOEt4Nj/q9/+o2jH3s=; b=DjXw6bZ2PibCS2LXaXUF5Gn4JTkW8F2dHVZo/hVDn948wVk6zocXWZJk SRbD96drTWROmWoIIFoYtZury4O7IJeQlKgm1/tQwC3udE3qYUgBnlxZ+ 8DKzXyQpFzBXSA80n/C3/YI3NA2YL82UnV120ejl5XbXJUzkR6iZFEOSH Mbke4cwqcvNOno4iMuJQmmqdv20+UUsewuXgko2IH7UOvio3mNi6SCqFC yRnekNeDTMJxKzH+oKDfnLdqMQf7aX6Tdms1tXu5vPwaaZUntyRJvQD+t pbbyW7XpuUliArR/JwYhv6KXRn0sZF5BtDdgTz92wFZE6k/HhLxKfaMcl Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705267" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705267" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222624" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222624" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:02 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 07/17] graph: move node clone name func into private as common Date: Tue, 6 Jun 2023 22:47:36 +0800 Message-Id: <20230606144746.708388-8-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Move clone_name() into graph_private.h as a common function for both node and graph to naming a new cloned object. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_private.h | 41 +++++++++++++++++++++++++++++++++++++++ lib/graph/node.c | 26 +------------------------ 2 files changed, 42 insertions(+), 25 deletions(-) diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 6d2137c81b..a6d8c6e98b 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -11,6 +11,8 @@ #include #include #include +#include +#include #include "rte_graph.h" #include "rte_graph_worker.h" @@ -114,6 +116,45 @@ struct graph { /**< Nodes in a graph. */ }; +/* Node and graph common functions */ +/** + * @internal + * + * Naming a cloned graph or node by appending a string to base name. + * + * @param new_name + * Pointer to the name of the cloned object. + * @param base_name + * Pointer to the name of original object. + * @param append_str + * Pointer to the appended string. + * + * @return + * 0 on success, negative errno value otherwise. + */ +static inline int clone_name(char *new_name, char *base_name, const char *append_str) +{ + ssize_t sz, rc; + +#define SZ RTE_MIN(RTE_NODE_NAMESIZE, RTE_GRAPH_NAMESIZE) + rc = rte_strscpy(new_name, base_name, SZ); + if (rc < 0) + goto fail; + sz = rc; + rc = rte_strscpy(new_name + sz, "-", RTE_MAX((int16_t)(SZ - sz), 0)); + if (rc < 0) + goto fail; + sz += rc; + sz = rte_strscpy(new_name + sz, append_str, RTE_MAX((int16_t)(SZ - sz), 0)); + if (sz < 0) + goto fail; + + return 0; +fail: + rte_errno = E2BIG; + return -rte_errno; +} + /* Node functions */ STAILQ_HEAD(node_head, node); diff --git a/lib/graph/node.c b/lib/graph/node.c index 339b4a0da5..99a9622779 100644 --- a/lib/graph/node.c +++ b/lib/graph/node.c @@ -115,30 +115,6 @@ __rte_node_register(const struct rte_node_register *reg) return RTE_NODE_ID_INVALID; } -static int -clone_name(struct rte_node_register *reg, struct node *node, const char *name) -{ - ssize_t sz, rc; - -#define SZ RTE_NODE_NAMESIZE - rc = rte_strscpy(reg->name, node->name, SZ); - if (rc < 0) - goto fail; - sz = rc; - rc = rte_strscpy(reg->name + sz, "-", RTE_MAX((int16_t)(SZ - sz), 0)); - if (rc < 0) - goto fail; - sz += rc; - sz = rte_strscpy(reg->name + sz, name, RTE_MAX((int16_t)(SZ - sz), 0)); - if (sz < 0) - goto fail; - - return 0; -fail: - rte_errno = E2BIG; - return -rte_errno; -} - static rte_node_t node_clone(struct node *node, const char *name) { @@ -170,7 +146,7 @@ node_clone(struct node *node, const char *name) reg->next_nodes[i] = node->next_nodes[i]; /* Naming ceremony of the new node. 
name is node->name + "-" + name */ - if (clone_name(reg, node, name)) + if (clone_name(reg->name, node->name, name)) goto free; rc = __rte_node_register(reg); From patchwork Tue Jun 6 14:47:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128214 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A16DD42C40; Tue, 6 Jun 2023 16:55:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A35FD410D7; Tue, 6 Jun 2023 16:55:19 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 5EDEB40223 for ; Tue, 6 Jun 2023 16:55:17 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063317; x=1717599317; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bjzZXdhOIWivGSccAcjMm8BBQEP4qdcqII0IpSkXr6Y=; b=nHy0O7w4JvNXqteHl32YvzfMM0hQAC+gmCqV5pGRRpPLU8naOkOgceyh wlPD8SjbbAQZNJls2LRKwVRPIPgHKZD9izI66k1Mw3c9oigF5BXrTDpoW l3cJDbiI8PfVOBaoTC293ImZ7JcIEHWUqHVTbc3BDkIQFEi7OvgNN5RqG RASYyfXqmbv0qIiuyor9jHZ0VMpVLnYpXkbDrlHcx2O1FYiyDm2aG9mQF MMtRyBWc9arOs/ZLollL1i/dE6eaCwCJEo+Rp6lbU+sMVQTlh5gW5/Qnp wOZkf8chGkHWr+S6q1ZECSZbxCDOCprZI41dgsLcv+hoG5cLy9IPKW58J g==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705313" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705313" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:08 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222674" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222674" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:04 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 08/17] graph: introduce graph clone API for other worker core Date: Tue, 6 Jun 2023 22:47:37 +0800 Message-Id: <20230606144746.708388-9-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch adds graph API for supporting to clone the graph object for a specified worker core. The new graph will also clone all nodes. 
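A deployment sketch combining this clone API with the core bind API added earlier in the series: the static parent graph is cloned once per worker lcore and each clone is bound to its lcore. Graph names, ids and lcore numbers are illustrative:

#include <stdio.h>
#include <rte_graph.h>

static int
app_clone_per_worker(rte_graph_t parent, const unsigned int *lcores, int nb_lcores)
{
        char name[RTE_GRAPH_NAMESIZE];
        rte_graph_t id;
        int i;

        for (i = 0; i < nb_lcores; i++) {
                /* Final graph name becomes "<parent graph name>-wkr<i>". */
                snprintf(name, sizeof(name), "wkr%d", i);

                id = rte_graph_clone(parent, name);
                if (id == RTE_GRAPH_ID_INVALID)
                        return -1;

                /* Each clone prefers to run on its own worker lcore. */
                if (rte_graph_model_mcore_dispatch_core_bind(id, lcores[i]) != 0)
                        return -1;
        }

        return 0;
}

As the API documentation in this patch notes, all clones attached to a parent graph must later be destroyed together.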
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 89 +++++++++++++++++++++++++++++++++++++++ lib/graph/graph_private.h | 2 + lib/graph/rte_graph.h | 20 +++++++++ lib/graph/version.map | 1 + 4 files changed, 112 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index f8243fa61a..84e01d11d0 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -403,6 +403,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm) graph->src_node_count = src_node_count; graph->node_count = graph_nodes_count(graph); graph->id = graph_id; + graph->parent_id = RTE_GRAPH_ID_INVALID; graph->lcore_id = RTE_MAX_LCORE; graph->num_pkt_to_capture = prm->num_pkt_to_capture; if (prm->pcap_filename) @@ -467,6 +468,94 @@ rte_graph_destroy(rte_graph_t id) return rc; } +static rte_graph_t +graph_clone(struct graph *parent_graph, const char *name) +{ + struct graph_node *graph_node; + struct graph *graph; + + graph_spinlock_lock(); + + /* Don't allow to clone a node from a cloned graph */ + if (parent_graph->parent_id != RTE_GRAPH_ID_INVALID) + SET_ERR_JMP(EEXIST, fail, "A cloned graph is not allowed to be cloned"); + + /* Create graph object */ + graph = calloc(1, sizeof(*graph)); + if (graph == NULL) + SET_ERR_JMP(ENOMEM, fail, "Failed to calloc cloned graph object"); + + /* Naming ceremony of the new graph. name is node->name + "-" + name */ + if (clone_name(graph->name, parent_graph->name, name)) + goto free; + + /* Check for existence of duplicate graph */ + if (rte_graph_from_name(graph->name) != RTE_GRAPH_ID_INVALID) + SET_ERR_JMP(EEXIST, free, "Found duplicate graph %s", + graph->name); + + /* Clone nodes from parent graph firstly */ + STAILQ_INIT(&graph->node_list); + STAILQ_FOREACH(graph_node, &parent_graph->node_list, next) { + if (graph_node_add(graph, graph_node->node)) + goto graph_cleanup; + } + + /* Just update adjacency list of all nodes in the graph */ + if (graph_adjacency_list_update(graph)) + goto graph_cleanup; + + /* Initialize the graph object */ + graph->src_node_count = parent_graph->src_node_count; + graph->node_count = parent_graph->node_count; + graph->parent_id = parent_graph->id; + graph->lcore_id = parent_graph->lcore_id; + graph->socket = parent_graph->socket; + graph->id = graph_id; + + /* Allocate the Graph fast path memory and populate the data */ + if (graph_fp_mem_create(graph)) + goto graph_cleanup; + + /* Clone the graph model */ + graph->graph->model = parent_graph->graph->model; + + /* Call init() of the all the nodes in the graph */ + if (graph_node_init(graph)) + goto graph_mem_destroy; + + /* All good, Lets add the graph to the list */ + graph_id++; + STAILQ_INSERT_TAIL(&graph_list, graph, next); + + graph_spinlock_unlock(); + return graph->id; + +graph_mem_destroy: + graph_fp_mem_destroy(graph); +graph_cleanup: + graph_cleanup(graph); +free: + free(graph); +fail: + graph_spinlock_unlock(); + return RTE_GRAPH_ID_INVALID; +} + +rte_graph_t +rte_graph_clone(rte_graph_t id, const char *name) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + return graph_clone(graph, name); + +fail: + return RTE_GRAPH_ID_INVALID; +} + rte_graph_t rte_graph_from_name(const char *name) { diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index a6d8c6e98b..354dc8ac0a 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -102,6 +102,8 @@ struct graph { /**< Circular buffer mask for wrap around. 
*/ rte_graph_t id; /**< Graph identifier. */ + rte_graph_t parent_id; + /**< Parent graph identifier. */ unsigned int lcore_id; /**< Lcore identifier where the graph prefer to run on. Used for mcore dispatch model. */ size_t mem_sz; diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index f70c694e77..998cade200 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -247,6 +247,26 @@ rte_graph_t rte_graph_create(const char *name, struct rte_graph_param *prm); __rte_experimental int rte_graph_destroy(rte_graph_t id); +/** + * Clone Graph. + * + * Clone a graph from static graph (graph created from rte_graph_create()). And + * all cloned graphs attached to the parent graph MUST be destroyed together + * for fast schedule design limitation (stop ALL graph walk firstly). + * + * @param id + * Static graph id to clone from. + * @param name + * Name of the new graph. The library prepends the parent graph name to the + * user-specified name. The final graph name will be, + * "parent graph name" + "-" + name. + * + * @return + * Valid graph id on success, RTE_GRAPH_ID_INVALID otherwise. + */ +__rte_experimental +rte_graph_t rte_graph_clone(rte_graph_t id, const char *name); + /** * Get graph id from graph name. * diff --git a/lib/graph/version.map b/lib/graph/version.map index 132e666b79..eccecc8767 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -7,6 +7,7 @@ EXPERIMENTAL { rte_graph_create; rte_graph_destroy; + rte_graph_clone; rte_graph_dump; rte_graph_export; rte_graph_from_name; From patchwork Tue Jun 6 14:47:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128215 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2772842C40; Tue, 6 Jun 2023 16:55:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CF32542D4F; Tue, 6 Jun 2023 16:55:23 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id D55BF40ED6 for ; Tue, 6 Jun 2023 16:55:21 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063322; x=1717599322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=RDGTOqIp59nln+TXHODTWAD7ZO8gdwrWyHlQiwA66OE=; b=eUiezQgdkGAODcXA4uvlLoYzWSn8hcq8IDKPQtdzn091iKqXnvyS2vpN Z4zBhsWjTctwnB5vUCYiew/P65NVmjysLwXf2qTr+KeUyu/tASf/6ytG4 m9LZP8qsGuzg5ajUoOVO4zWni98cfV34Ok/XDOswO6KaEWEO+Xd4VFu8G mH1dQJCu2V39CjAk3A24I/yQ45PtTjPwwbyxCT6fpR6oe4SWETHVe6lbC EMF7kSgM6Lccfh43XzzXMrNGRYip6lknQNmclJ2BfgsXKb6U7O39W3uY5 zeGBv7XIaXd3JaY8zof8EHeEorY3yw70zocem9NVVE5/a5oRTRWTr4XOp A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705372" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705372" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:10 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222717" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222717" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:08 -0700 From: Zhirun Yan To: 
dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 09/17] graph: add structure for stream moving between cores Date: Tue, 6 Jun 2023 22:47:38 +0800 Message-Id: <20230606144746.708388-10-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add graph_mcore_dispatch_wq_node to hold graph scheduling workqueue node. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 2 ++ lib/graph/graph_populate.c | 1 + lib/graph/graph_private.h | 12 ++++++++++++ lib/graph/rte_graph_worker_common.h | 29 +++++++++++++++++++++++++++++ 4 files changed, 44 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index 84e01d11d0..1fee835804 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -289,6 +289,7 @@ rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore) RTE_ASSERT(graph->graph->model == RTE_GRAPH_MODEL_MCORE_DISPATCH); graph->lcore_id = lcore; + graph->graph->dispatch.lcore_id = graph->lcore_id; graph->socket = rte_lcore_to_socket_id(lcore); /* check the availability of source node */ @@ -312,6 +313,7 @@ rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id) break; graph->lcore_id = RTE_MAX_LCORE; + graph->graph->dispatch.lcore_id = RTE_MAX_LCORE; fail: return; diff --git a/lib/graph/graph_populate.c b/lib/graph/graph_populate.c index 2c0844ce92..ed596a7711 100644 --- a/lib/graph/graph_populate.c +++ b/lib/graph/graph_populate.c @@ -89,6 +89,7 @@ graph_nodes_populate(struct graph *_graph) } node->id = graph_node->node->id; node->parent_id = pid; + node->dispatch.lcore_id = graph_node->node->lcore_id; nb_edges = graph_node->node->nb_edges; node->nb_edges = nb_edges; off += sizeof(struct rte_node); diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 354dc8ac0a..d84174b667 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -64,6 +64,18 @@ struct node { char next_nodes[][RTE_NODE_NAMESIZE]; /**< Names of next nodes. */ }; +/** + * @internal + * + * Structure that holds the graph scheduling workqueue node stream. + * Used for mcore dispatch model. + */ +struct graph_mcore_dispatch_wq_node { + rte_graph_off_t node_off; + uint16_t nb_objs; + void *objs[RTE_GRAPH_BURST_SIZE]; +} __rte_cache_aligned; + /** * @internal * diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 5dba3c0edd..a9a72a6005 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -36,12 +36,20 @@ extern "C" { #define RTE_GRAPH_MODEL_MCORE_DISPATCH 1 /**< Dispatch model to support cross-core dispatching within core affinity. */ +/** + * @internal + * + * Singly-linked list head for graph schedule run-queue. + */ +SLIST_HEAD(rte_graph_rq_head, rte_graph); + /** * @internal * * Data structure to hold graph data. */ struct rte_graph { + /* Fast path area. */ uint32_t tail; /**< Tail of circular buffer. 
*/ uint32_t head; /**< Head of circular buffer. */ uint32_t cir_mask; /**< Circular buffer wrap around mask. */ @@ -49,6 +57,20 @@ struct rte_graph { rte_graph_off_t *cir_start; /**< Pointer to circular buffer. */ rte_graph_off_t nodes_start; /**< Offset at which node memory starts. */ uint32_t model; /**< graph model */ + RTE_STD_C11 + union { + /* Fast schedule area for mcore dispatch model */ + struct { + struct rte_graph_rq_head *rq __rte_cache_aligned; /* The run-queue */ + struct rte_graph_rq_head rq_head; /* The head for run-queue list */ + + SLIST_ENTRY(rte_graph) rq_next; /* The next for run-queue list */ + unsigned int lcore_id; /**< The graph running Lcore. */ + struct rte_ring *wq; /**< The work-queue for pending streams. */ + struct rte_mempool *mp; /**< The mempool for scheduling streams. */ + } dispatch; /** Only used by dispatch model */ + }; + /* End of Fast path area.*/ rte_graph_t id; /**< Graph identifier. */ int socket; /**< Socket ID where memory is allocated. */ char name[RTE_GRAPH_NAMESIZE]; /**< Name of the graph. */ @@ -81,6 +103,13 @@ struct rte_node { /** Original process function when pcap is enabled. */ rte_node_process_t original_process; + RTE_STD_C11 + union { + /* Fast schedule area for mcore dispatch model */ + struct { + unsigned int lcore_id; /**< Node running lcore. */ + } dispatch; + }; /* Fast path area */ #define RTE_NODE_CTX_SZ 16 uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. */ From patchwork Tue Jun 6 14:47:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128216 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06B6142C40; Tue, 6 Jun 2023 16:55:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 56BFE42D62; Tue, 6 Jun 2023 16:55:26 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 9C17540223 for ; Tue, 6 Jun 2023 16:55:24 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063324; x=1717599324; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7aohzRFhcFdQc4ZZoi/cHjaoAghOXj4OWk7YpmH2mVI=; b=RePMo7hPgTIbzzQTivdM6N8M/0ofvUTeEr1/8TvzTWOwdZAIR+Yx/TeZ 0XGPL4QTwyQvnhdqzrTlRNG3hKP6jMif7/BxYz5VTeKSF5vqSo3POWSq1 G0CuhgYFZy2LR3xTVuIwy3DaY2JlH/1dVwuuUxV24KS/mqOiPTeSaTcaG kUQDthCKpIBU+vY3Xh0h7o1h5WWrhUb4p2ucUkjwXl98VvLfRtYyuLTCS HjcW//qNXmih9iwzp/6gWrsqIlIu4mooo3tEq5Sw1Fr6ffPPeQlu2tbL1 BdPB+6cKQKqXtO4svQnVs+lF8tPZtjJP/6zwHUO1TDeaKp99LJbRGXUyX Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705419" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705419" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222768" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222768" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:10 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, 
ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 10/17] graph: introduce stream moving cross cores Date: Tue, 6 Jun 2023 22:47:39 +0800 Message-Id: <20230606144746.708388-11-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch introduces key functions to allow a worker thread to enable enqueue and move streams of objects to the next nodes over different cores for mcore dispatch model. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 6 +- lib/graph/graph_private.h | 31 ++++ lib/graph/meson.build | 2 +- lib/graph/rte_graph.h | 15 +- lib/graph/rte_graph_model_mcore_dispatch.c | 158 +++++++++++++++++++++ lib/graph/rte_graph_model_mcore_dispatch.h | 45 ++++++ lib/graph/version.map | 2 + 7 files changed, 254 insertions(+), 5 deletions(-) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index 1fee835804..8307ff079c 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -471,7 +471,7 @@ rte_graph_destroy(rte_graph_t id) } static rte_graph_t -graph_clone(struct graph *parent_graph, const char *name) +graph_clone(struct graph *parent_graph, const char *name, struct rte_graph_param *prm) { struct graph_node *graph_node; struct graph *graph; @@ -545,14 +545,14 @@ graph_clone(struct graph *parent_graph, const char *name) } rte_graph_t -rte_graph_clone(rte_graph_t id, const char *name) +rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm) { struct graph *graph; GRAPH_ID_CHECK(id); STAILQ_FOREACH(graph, &graph_list, next) if (graph->id == id) - return graph_clone(graph, name); + return graph_clone(graph, name, prm); fail: return RTE_GRAPH_ID_INVALID; diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d84174b667..d0ef13b205 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -414,4 +414,35 @@ void graph_dump(FILE *f, struct graph *g); */ void node_dump(FILE *f, struct node *n); +/** + * @internal + * + * Create the graph schedule work queue for mcore dispatch model. + * All cloned graphs attached to the parent graph MUST be destroyed together + * for fast schedule design limitation. + * + * @param _graph + * The graph object + * @param _parent_graph + * The parent graph object which holds the run-queue head. + * @param prm + * Graph parameter, includes model-specific parameters in this graph. + * + * @return + * - 0: Success. + * - <0: Graph schedule work queue related error. + */ +int graph_sched_wq_create(struct graph *_graph, struct graph *_parent_graph, + struct rte_graph_param *prm); + +/** + * @internal + * + * Destroy the graph schedule work queue for mcore dispatch model. 
+ * + * @param _graph + * The graph object + */ +void graph_sched_wq_destroy(struct graph *_graph); + #endif /* _RTE_GRAPH_PRIVATE_H_ */ diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 0685cf9e72..9d51eabe33 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -20,4 +20,4 @@ sources = files( ) headers = files('rte_graph.h', 'rte_graph_worker.h') -deps += ['eal', 'pcapng'] +deps += ['eal', 'pcapng', 'mempool', 'ring'] diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index 998cade200..2ffee520b1 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -169,6 +169,17 @@ struct rte_graph_param { bool pcap_enable; /**< Pcap enable. */ uint64_t num_pkt_to_capture; /**< Number of packets to capture. */ char *pcap_filename; /**< Filename in which packets to be captured.*/ + + RTE_STD_C11 + union { + struct { + uint64_t rsvd; /**< Reserved for rtc model. */ + } rtc; + struct { + uint32_t wq_size_max; /**< Maximum size of workqueue for dispatch model. */ + uint32_t mp_capacity; /**< Capacity of memory pool for dispatch model. */ + } dispatch; + }; }; /** @@ -260,12 +271,14 @@ int rte_graph_destroy(rte_graph_t id); * Name of the new graph. The library prepends the parent graph name to the * user-specified name. The final graph name will be, * "parent graph name" + "-" + name. + * @param prm + * Graph parameter, includes model-specific parameters in this graph. * * @return * Valid graph id on success, RTE_GRAPH_ID_INVALID otherwise. */ __rte_experimental -rte_graph_t rte_graph_clone(rte_graph_t id, const char *name); +rte_graph_t rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm); /** * Get graph id from graph name. diff --git a/lib/graph/rte_graph_model_mcore_dispatch.c b/lib/graph/rte_graph_model_mcore_dispatch.c index 9df2479a10..1d99384816 100644 --- a/lib/graph/rte_graph_model_mcore_dispatch.c +++ b/lib/graph/rte_graph_model_mcore_dispatch.c @@ -5,6 +5,164 @@ #include "graph_private.h" #include "rte_graph_model_mcore_dispatch.h" +int +graph_sched_wq_create(struct graph *_graph, struct graph *_parent_graph, + struct rte_graph_param *prm) +{ + struct rte_graph *parent_graph = _parent_graph->graph; + struct rte_graph *graph = _graph->graph; + unsigned int wq_size; + unsigned int flags = RING_F_SC_DEQ; + + wq_size = GRAPH_SCHED_WQ_SIZE(graph->nb_nodes); + wq_size = rte_align32pow2(wq_size + 1); + + if (prm->dispatch.wq_size_max > 0) + wq_size = wq_size <= (prm->dispatch.wq_size_max) ? wq_size : + prm->dispatch.wq_size_max; + + if (!rte_is_power_of_2(wq_size)) + flags |= RING_F_EXACT_SZ; + + graph->dispatch.wq = rte_ring_create(graph->name, wq_size, graph->socket, + flags); + if (graph->dispatch.wq == NULL) + SET_ERR_JMP(EIO, fail, "Failed to allocate graph WQ"); + + if (prm->dispatch.mp_capacity > 0) + wq_size = (wq_size <= prm->dispatch.mp_capacity) ? 
wq_size : + prm->dispatch.mp_capacity; + + graph->dispatch.mp = rte_mempool_create(graph->name, wq_size, + sizeof(struct graph_mcore_dispatch_wq_node), + 0, 0, NULL, NULL, NULL, NULL, + graph->socket, MEMPOOL_F_SP_PUT); + if (graph->dispatch.mp == NULL) + SET_ERR_JMP(EIO, fail_mp, + "Failed to allocate graph WQ schedule entry"); + + graph->dispatch.lcore_id = _graph->lcore_id; + + if (parent_graph->dispatch.rq == NULL) { + parent_graph->dispatch.rq = &parent_graph->dispatch.rq_head; + SLIST_INIT(parent_graph->dispatch.rq); + } + + graph->dispatch.rq = parent_graph->dispatch.rq; + SLIST_INSERT_HEAD(graph->dispatch.rq, graph, dispatch.rq_next); + + return 0; + +fail_mp: + rte_ring_free(graph->dispatch.wq); + graph->dispatch.wq = NULL; +fail: + return -rte_errno; +} + +void +graph_sched_wq_destroy(struct graph *_graph) +{ + struct rte_graph *graph = _graph->graph; + + if (graph == NULL) + return; + + rte_ring_free(graph->dispatch.wq); + graph->dispatch.wq = NULL; + + rte_mempool_free(graph->dispatch.mp); + graph->dispatch.mp = NULL; +} + +static __rte_always_inline bool +__graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) +{ + struct graph_mcore_dispatch_wq_node *wq_node; + uint16_t off = 0; + uint16_t size; + +submit_again: + if (rte_mempool_get(graph->dispatch.mp, (void **)&wq_node) < 0) + goto fallback; + + size = RTE_MIN(node->idx, RTE_DIM(wq_node->objs)); + wq_node->node_off = node->off; + wq_node->nb_objs = size; + rte_memcpy(wq_node->objs, &node->objs[off], size * sizeof(void *)); + + while (rte_ring_mp_enqueue_bulk_elem(graph->dispatch.wq, (void *)&wq_node, + sizeof(wq_node), 1, NULL) == 0) + rte_pause(); + + off += size; + node->idx -= size; + if (node->idx > 0) + goto submit_again; + + return true; + +fallback: + if (off != 0) + memmove(&node->objs[0], &node->objs[off], + node->idx * sizeof(void *)); + + return false; +} + +bool __rte_noinline +__rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node, + struct rte_graph_rq_head *rq) +{ + const unsigned int lcore_id = node->dispatch.lcore_id; + struct rte_graph *graph; + + SLIST_FOREACH(graph, rq, dispatch.rq_next) + if (graph->dispatch.lcore_id == lcore_id) + break; + + return graph != NULL ? 
__graph_sched_node_enqueue(node, graph) : false; +} + +void +__rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph) +{ +#define WQ_SZ 32 + struct graph_mcore_dispatch_wq_node *wq_node; + struct rte_mempool *mp = graph->dispatch.mp; + struct rte_ring *wq = graph->dispatch.wq; + uint16_t idx, free_space; + struct rte_node *node; + unsigned int i, n; + struct graph_mcore_dispatch_wq_node *wq_nodes[WQ_SZ]; + + n = rte_ring_sc_dequeue_burst_elem(wq, wq_nodes, sizeof(wq_nodes[0]), + RTE_DIM(wq_nodes), NULL); + if (n == 0) + return; + + for (i = 0; i < n; i++) { + wq_node = wq_nodes[i]; + node = RTE_PTR_ADD(graph, wq_node->node_off); + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + idx = node->idx; + free_space = node->size - idx; + + if (unlikely(free_space < wq_node->nb_objs)) + __rte_node_stream_alloc_size(graph, node, node->size + wq_node->nb_objs); + + memmove(&node->objs[idx], wq_node->objs, wq_node->nb_objs * sizeof(void *)); + node->idx = idx + wq_node->nb_objs; + + __rte_node_process(graph, node); + + wq_node->nb_objs = 0; + node->idx = 0; + } + + rte_mempool_put_bulk(mp, (void **)wq_nodes, n); +} + int rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, unsigned int lcore_id) { diff --git a/lib/graph/rte_graph_model_mcore_dispatch.h b/lib/graph/rte_graph_model_mcore_dispatch.h index 7da0483d13..6163f96c37 100644 --- a/lib/graph/rte_graph_model_mcore_dispatch.h +++ b/lib/graph/rte_graph_model_mcore_dispatch.h @@ -20,8 +20,53 @@ extern "C" { #endif +#include +#include +#include +#include + #include "rte_graph_worker_common.h" +#define GRAPH_SCHED_WQ_SIZE_MULTIPLIER 8 +#define GRAPH_SCHED_WQ_SIZE(nb_nodes) \ + ((typeof(nb_nodes))((nb_nodes) * GRAPH_SCHED_WQ_SIZE_MULTIPLIER)) + +/** + * @internal + * + * Schedule the node to the right graph's work queue for mcore dispatch model. + * + * @param node + * Pointer to the scheduled node object. + * @param rq + * Pointer to the scheduled run-queue for all graphs. + * + * @return + * True on success, false otherwise. + * + * @note + * This implementation is used by mcore dispatch model only and user application + * should not call it directly. + */ +__rte_experimental +bool __rte_noinline __rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node, + struct rte_graph_rq_head *rq); + +/** + * @internal + * + * Process all nodes (streams) in the graph's work queue for mcore dispatch model. + * + * @param graph + * Pointer to the graph object. + * + * @note + * This implementation is used by mcore dispatch model only and user application + * should not call it directly. + */ +__rte_experimental +void __rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph); + /** * Set lcore affinity with the node used for mcore dispatch model. 
* diff --git a/lib/graph/version.map index eccecc8767..d33c453d97 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -48,6 +48,8 @@ EXPERIMENTAL { rte_graph_worker_model_set; rte_graph_worker_model_get; + __rte_graph_mcore_dispatch_sched_wq_process; + __rte_graph_mcore_dispatch_sched_node_enqueue; rte_graph_model_mcore_dispatch_node_lcore_affinity_set; From patchwork Tue Jun 6 14:47:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128217 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9487442C40; Tue, 6 Jun 2023 16:55:56 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5630542D74; Tue, 6 Jun 2023 16:55:27 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id B0B7C42D1D for ; Tue, 6 Jun 2023 16:55:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063325; x=1717599325; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JXh/wlQFw8fp6N8uGAnSg/s9CS3hV2TY2lIg7fI74Is=; b=QpwcHE4XmUHfEPrWxvoZiGs5BMok3skO4CQU4I9W6ST781koQit7uPSj Ws3jDSxuGXwVCcYAv8HWF7vexLvoVGSp7pq+/lDFLdEsaxVN8/PR3gW6f MMJq0YpQ0i35bfFf3vlpBQKwDkt6oxdP5rxTp2D7X8182hhNHYd3meAhh bYrNaWtrT0mcLl85KN1t7Bt3a0kS7D3i5tH7uw2ciCPaCOmdYd04vqfTp f1UTIbKk4io7vHnLzn1jGCbvuqYYKLAbnHCte9PFIku1Ou78g8rhkjrrC j8PjJrIEWsuEISRv3+2dpy5lQA9jgIzghR+W1PNMitqSkvTdtqTKfaUf1 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705445" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705445" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222809" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222809" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:13 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 11/17] graph: enable create and destroy graph scheduling workqueue Date: Tue, 6 Jun 2023 22:47:40 +0800 Message-Id: <20230606144746.708388-12-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch hooks creation and destruction of the scheduling workqueue into the common graph clone and destroy operations.
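As an illustration of how an application is expected to drive this, the sketch below clones a static graph for one worker lcore with dispatch-model parameters and leaves teardown to rte_graph_destroy(), which now also frees the clone's work queue and mempool. This is a minimal sketch, not part of the patch: the clone_for_lcore() helper, the "wk-%u" naming and the wq_size_max/mp_capacity values are illustrative, and it assumes the worker model has already been set to RTE_GRAPH_MODEL_MCORE_DISPATCH (otherwise no work queue is created) and that rte_graph_worker.h pulls in the dispatch-model declarations as done elsewhere in this series.

#include <stdio.h>
#include <rte_graph.h>
#include <rte_graph_worker.h>

/* Hypothetical helper: clone a static graph for one worker lcore. */
static rte_graph_t
clone_for_lcore(rte_graph_t parent, unsigned int lcore_id)
{
	struct rte_graph_param prm = { 0 };
	char name[RTE_GRAPH_NAMESIZE];
	rte_graph_t id;

	/* Size the per-clone scheduling work queue and its mempool. */
	prm.dispatch.wq_size_max = 32;
	prm.dispatch.mp_capacity = 1024;

	snprintf(name, sizeof(name), "wk-%u", lcore_id);
	id = rte_graph_clone(parent, name, &prm); /* also creates the dispatch work queue */
	if (id == RTE_GRAPH_ID_INVALID)
		return RTE_GRAPH_ID_INVALID;

	/* Bind the cloned graph to its worker lcore. */
	rte_graph_model_mcore_dispatch_core_bind(id, lcore_id);

	return id;
}

Keep in mind the limitation stated with the clone API: all clones attached to a parent graph must be destroyed together, after every graph walk has been stopped.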
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index 8307ff079c..18318b5745 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -449,6 +449,11 @@ rte_graph_destroy(rte_graph_t id) while (graph != NULL) { tmp = STAILQ_NEXT(graph, next); if (graph->id == id) { + /* Destroy the schedule work queue if has */ + if (rte_graph_worker_model_get(graph->graph) == + RTE_GRAPH_MODEL_MCORE_DISPATCH) + graph_sched_wq_destroy(graph); + /* Call fini() of the all the nodes in the graph */ graph_node_fini(graph); /* Destroy graph fast path memory */ @@ -522,6 +527,11 @@ graph_clone(struct graph *parent_graph, const char *name, struct rte_graph_param /* Clone the graph model */ graph->graph->model = parent_graph->graph->model; + /* Create the graph schedule work queue */ + if (rte_graph_worker_model_get(graph->graph) == RTE_GRAPH_MODEL_MCORE_DISPATCH && + graph_sched_wq_create(graph, parent_graph, prm)) + goto graph_mem_destroy; + /* Call init() of the all the nodes in the graph */ if (graph_node_init(graph)) goto graph_mem_destroy; From patchwork Tue Jun 6 14:47:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128218 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6C87F42C40; Tue, 6 Jun 2023 16:56:02 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 634E842D33; Tue, 6 Jun 2023 16:55:33 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id C8A8D42C4D for ; Tue, 6 Jun 2023 16:55:30 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063331; x=1717599331; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=J6yvh+QMdHB91tvUuhJGraJTLG7gKn7LGa0hBaNxO6E=; b=jnOQNwGbzf9y7riq4NyQPpDlEhBPq7PFqSX3fr6dDGewPIaPai5xA/Pu NpSBBId9lXNQHeNsz7lUpYdwnUYYGxV7uSS0TOBKy7t95CDSU7JYs/1GV 5qLSGqlmMEZN0++ntAVP39+HRiXXeiIwXOSmdC8st7ypgKSInjrfK3x8c vf3Sx79U5lWyq6JueBBtd88qH4XIeDRfkb3LdXOC3eeLSCAW3rD4L7Vax aSJrqA5NIqmp1m6qkay59pnk9258IH6jzh2yBngzCK+NQO6BaMTox/Ej4 +pmPGo8GZjJZzBalFoj7ckX6utndWkp1loCL+OFITAjjNhngWHtqKvjYH A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705528" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705528" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222850" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222850" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:16 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 12/17] graph: introduce graph walk by cross-core dispatch Date: Tue, 6 Jun 2023 
22:47:41 +0800 Message-Id: <20230606144746.708388-13-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch introduces the task scheduler mechanism to enable dispatching tasks to another worker cores. Currently, there is only a local work queue for one graph to walk. We introduce a scheduler worker queue in each worker core for dispatching tasks. It will perform the walk on scheduler work queue first, then handle the local work queue. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_model_mcore_dispatch.h | 44 ++++++++++++++++++++++ 1 file changed, 44 insertions(+) diff --git a/lib/graph/rte_graph_model_mcore_dispatch.h b/lib/graph/rte_graph_model_mcore_dispatch.h index 6163f96c37..c78a3bbdf9 100644 --- a/lib/graph/rte_graph_model_mcore_dispatch.h +++ b/lib/graph/rte_graph_model_mcore_dispatch.h @@ -83,6 +83,50 @@ __rte_experimental int rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, unsigned int lcore_id); +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +__rte_experimental +static inline void +rte_graph_walk_mcore_dispatch(struct rte_graph *graph) +{ + const rte_graph_off_t *cir_start = graph->cir_start; + const rte_node_t mask = graph->cir_mask; + uint32_t head = graph->head; + struct rte_node *node; + + RTE_ASSERT(graph->parent_id != RTE_GRAPH_ID_INVALID); + if (graph->dispatch.wq != NULL) + __rte_graph_mcore_dispatch_sched_wq_process(graph); + + while (likely(head != graph->tail)) { + node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); + + /* skip the src nodes which not bind with current worker */ + if ((int32_t)head < 0 && node->dispatch.lcore_id != graph->dispatch.lcore_id) + continue; + + /* Schedule the node until all task/objs are done */ + if (node->dispatch.lcore_id != RTE_MAX_LCORE && + graph->dispatch.lcore_id != node->dispatch.lcore_id && + graph->dispatch.rq != NULL && + __rte_graph_mcore_dispatch_sched_node_enqueue(node, graph->dispatch.rq)) + continue; + + __rte_node_process(graph, node); + + head = likely((int32_t)head > 0) ? 
head & mask : head; + } + + graph->tail = 0; +} + #ifdef __cplusplus } #endif From patchwork Tue Jun 6 14:47:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128219 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 603AE42C40; Tue, 6 Jun 2023 16:56:08 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 84195411F3; Tue, 6 Jun 2023 16:55:35 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 71F6B40A84 for ; Tue, 6 Jun 2023 16:55:32 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063332; x=1717599332; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hLZR5LglfScRN4CMn6xPgoQ+JdEjh2CHjXIb/73lWbk=; b=a0gjmbHBGECHrJzws6XvgvmPEFxF+YQN59RWgdq0/PF8mreGEhI+X5XL 49XOPgaYdBgD+/cm+SMOTCwRSFg8+5qb51CkIlY66oOipTIB5aZgAD1qZ px96kRNswG6yj15+HbUKnt8rsQNWn/nj7LoF01Zm3zrkOuRfXSKQbq1/g HNydL6M58kz1RQ02D6MEj+PsMCOifb1EDgj0DNwo36wnv8t8dgBTXD2vy /IRtR9x/rtxmqVCfG0pQS3GMFyN6alk7bgxhktuVIvCPsk2C1SdSGSgRp 9Eqz/2M9gN5eB9oGkfVK2ot9sBlr6asZr98QZe9nSZHlIwuH4GvNskgpq w==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705575" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705575" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:22 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222893" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222893" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:19 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 13/17] graph: enable graph multicore dispatch scheduler model Date: Tue, 6 Jun 2023 22:47:42 +0800 Message-Id: <20230606144746.708388-14-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch enables choosing the new scheduler model. RTE_GRAPH_MODEL_SELECT must be defined before including rte_graph_worker.h to select a specific model at compile time.
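To make the compile-time selection concrete, a minimal sketch of a worker translation unit (not part of the patch) could look as follows; force_quit is a hypothetical application exit flag and the graph pointer is assumed to come from rte_graph_lookup(). Defining RTE_GRAPH_MODEL_SELECT to the dispatch model identifier makes rte_graph_walk() expand directly to rte_graph_walk_mcore_dispatch(); leaving it undefined keeps the run-to-completion walk, and defining it to any other value (as the l3fwd-graph example later in this series does for its runtime-select mode) makes rte_graph_walk() consult rte_graph_worker_model_get() on every invocation.

#include <stdbool.h>

/* Must come before including rte_graph_worker.h. */
#define RTE_GRAPH_MODEL_SELECT RTE_GRAPH_MODEL_MCORE_DISPATCH
#include <rte_graph_worker.h>

extern volatile bool force_quit; /* hypothetical application exit flag */

static int
graph_worker_main(void *arg)
{
	struct rte_graph *graph = arg; /* e.g. obtained via rte_graph_lookup() */

	while (!force_quit)
		rte_graph_walk(graph); /* compiles to rte_graph_walk_mcore_dispatch() */

	return 0;
}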
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_worker.h | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h index 5b58f7bda9..69bdd0e074 100644 --- a/lib/graph/rte_graph_worker.h +++ b/lib/graph/rte_graph_worker.h @@ -11,6 +11,7 @@ extern "C" { #endif #include "rte_graph_model_rtc.h" +#include "rte_graph_model_mcore_dispatch.h" /** * Perform graph walk on the circular buffer and invoke the process function @@ -25,7 +26,18 @@ __rte_experimental static inline void rte_graph_walk(struct rte_graph *graph) { +#if !defined(RTE_GRAPH_MODEL_SELECT) || RTE_GRAPH_MODEL_SELECT == RTE_GRAPH_MODEL_RTC rte_graph_walk_rtc(graph); +#elif defined(RTE_GRAPH_MODEL_SELECT) && RTE_GRAPH_MODEL_SELECT == RTE_GRAPH_MODEL_MCORE_DISPATCH + rte_graph_walk_mcore_dispatch(graph); +#else + int model = rte_graph_worker_model_get(graph); + + if (model == RTE_GRAPH_MODEL_RTC) + rte_graph_walk_rtc(graph); + else if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) + rte_graph_walk_mcore_dispatch(graph); +#endif } #ifdef __cplusplus From patchwork Tue Jun 6 14:47:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128220 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3FDF42C40; Tue, 6 Jun 2023 16:56:13 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C873042D69; Tue, 6 Jun 2023 16:55:36 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id B4A9540A84 for ; Tue, 6 Jun 2023 16:55:34 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063334; x=1717599334; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EqDyHTzCc0irCnKAMbTd2N+aF6559vss049R5DiJJCY=; b=XOEMzxVL8Cz5TZa45epjpNVxDw73C8fyd8RzxIC58atLoRE7Up29/49e ejnUHYgbC8ByLwJ1O9o7w/NdKWrP6ww0XyXzCeeTL1z46nUnpN9xyW4sx kERAI4pbREBvBwu6fJ9+XU4gb5w0qkwQLFJ+VVPHMjzHkRzpszivXV+Dg oOFcgvLshvhGLdkS8TeaMazGw1CEyp7yDBiDb7q4Bjw83Qq1+QFDWVMjb JNN3kvq2MwERGWUjujmX6/z/o4pyXVwCbhLppfk8Lb6kerUpsG6+JOoNn yrbgnIVUymrbFQq0ruzh5WArovbGVrklJjsXr4NkJq47mgJn3FObNPx5n g==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705623" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705623" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222935" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222935" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:22 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 14/17] graph: add stats for cross-core dispatching Date: Tue, 6 Jun 2023 22:47:43 +0800 Message-Id: 
<20230606144746.708388-15-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add stats for cross-core dispatching scheduler if stats collection is enabled. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_debug.c | 6 ++ lib/graph/graph_stats.c | 76 +++++++++++++++++++--- lib/graph/rte_graph.h | 10 +++ lib/graph/rte_graph_model_mcore_dispatch.c | 3 + lib/graph/rte_graph_worker_common.h | 2 + 5 files changed, 89 insertions(+), 8 deletions(-) diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c index b84412f5dd..cbb512e120 100644 --- a/lib/graph/graph_debug.c +++ b/lib/graph/graph_debug.c @@ -74,6 +74,12 @@ rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all) fprintf(f, " size=%d\n", n->size); fprintf(f, " idx=%d\n", n->idx); fprintf(f, " total_objs=%" PRId64 "\n", n->total_objs); + if (rte_graph_worker_model_get(g) == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + fprintf(f, " total_sched_objs=%" PRId64 "\n", + n->total_sched_objs); + fprintf(f, " total_sched_fail=%" PRId64 "\n", + n->total_sched_fail); + } fprintf(f, " total_calls=%" PRId64 "\n", n->total_calls); for (i = 0; i < n->nb_edges; i++) fprintf(f, " edge[%d] <%s>\n", i, diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c index c0140ba922..210ba01f5c 100644 --- a/lib/graph/graph_stats.c +++ b/lib/graph/graph_stats.c @@ -40,13 +40,19 @@ struct rte_graph_cluster_stats { struct cluster_node clusters[]; } __rte_cache_aligned; +#define boarder_model_dispatch() \ + fprintf(f, "+-------------------------------+---------------+--------" \ + "-------+---------------+---------------+---------------+" \ + "---------------+---------------+-" \ + "----------+\n") + #define boarder() \ fprintf(f, "+-------------------------------+---------------+--------" \ "-------+---------------+---------------+---------------+-" \ "----------+\n") static inline void -print_banner(FILE *f) +print_banner_default(FILE *f) { boarder(); fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s\n", "|Node", "|calls", @@ -55,6 +61,28 @@ print_banner(FILE *f) boarder(); } +static inline void +print_banner_dispatch(FILE *f) +{ + boarder_model_dispatch(); + fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s%-16s%-16s\n", + "|Node", "|calls", + "|objs", "|sched objs", "|sched fail", + "|realloc_count", "|objs/call", "|objs/sec(10E6)", + "|cycles/call|"); + boarder_model_dispatch(); +} + +static inline void +print_banner(FILE *f) +{ + if (rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph) == + RTE_GRAPH_MODEL_MCORE_DISPATCH) + print_banner_dispatch(f); + else + print_banner_default(f); +} + static inline void print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat) { @@ -76,11 +104,22 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat) objs_per_sec = ts_per_hz ? 
(objs - prev_objs) / ts_per_hz : 0; objs_per_sec /= 1000000; - fprintf(f, - "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 - "|%-15.3f|%-15.6f|%-11.4f|\n", - stat->name, calls, objs, stat->realloc_count, objs_per_call, - objs_per_sec, cycles_per_call); + if (rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph) == + RTE_GRAPH_MODEL_MCORE_DISPATCH) { + fprintf(f, + "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15.3f|%-15.6f|%-11.4f|\n", + stat->name, calls, objs, stat->dispatch.sched_objs, + stat->dispatch.sched_fail, stat->realloc_count, objs_per_call, + objs_per_sec, cycles_per_call); + } else { + fprintf(f, + "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15.3f|%-15.6f|%-11.4f|\n", + stat->name, calls, objs, stat->realloc_count, objs_per_call, + objs_per_sec, cycles_per_call); + } } static int @@ -88,13 +127,20 @@ graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie, const struct rte_graph_cluster_node_stats *stat) { FILE *f = cookie; + int model; + + model = rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph); if (unlikely(is_first)) print_banner(f); if (stat->objs) print_node(f, stat); - if (unlikely(is_last)) - boarder(); + if (unlikely(is_last)) { + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) + boarder_model_dispatch(); + else + boarder(); + } return 0; }; @@ -333,12 +379,20 @@ cluster_node_arregate_stats(struct cluster_node *cluster) { uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0; struct rte_graph_cluster_node_stats *stat = &cluster->stat; + uint64_t sched_objs = 0, sched_fail = 0; struct rte_node *node; rte_node_t count; + int model; + model = rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph); for (count = 0; count < cluster->nb_nodes; count++) { node = cluster->nodes[count]; + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + sched_objs += node->total_sched_objs; + sched_fail += node->total_sched_fail; + } + calls += node->total_calls; objs += node->total_objs; cycles += node->total_cycles; @@ -348,6 +402,12 @@ cluster_node_arregate_stats(struct cluster_node *cluster) stat->calls = calls; stat->objs = objs; stat->cycles = cycles; + + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + stat->dispatch.sched_objs = sched_objs; + stat->dispatch.sched_fail = sched_fail; + } + stat->ts = rte_get_timer_cycles(); stat->realloc_count = realloc_count; } diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index 2ffee520b1..28e50e49b8 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -220,6 +220,16 @@ struct rte_graph_cluster_node_stats { uint64_t prev_objs; /**< Previous number of processed objs. */ uint64_t prev_cycles; /**< Previous number of cycles. */ + RTE_STD_C11 + union { + struct { + uint64_t sched_objs; + /**< Previous number of scheduled objs for dispatch model. */ + uint64_t sched_fail; + /**< Previous number of failed schedule objs for dispatch model. */ + } dispatch; + }; + uint64_t realloc_count; /**< Realloc count. */ rte_node_t id; /**< Node identifier of stats. 
*/ diff --git a/lib/graph/rte_graph_model_mcore_dispatch.c b/lib/graph/rte_graph_model_mcore_dispatch.c index 1d99384816..b379568d6c 100644 --- a/lib/graph/rte_graph_model_mcore_dispatch.c +++ b/lib/graph/rte_graph_model_mcore_dispatch.c @@ -96,6 +96,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) rte_pause(); off += size; + node->total_sched_objs += size; node->idx -= size; if (node->idx > 0) goto submit_again; @@ -107,6 +108,8 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) memmove(&node->objs[0], &node->objs[off], node->idx * sizeof(void *)); + node->total_sched_fail += node->idx; + return false; } diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index a9a72a6005..cd49518164 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -110,6 +110,8 @@ struct rte_node { unsigned int lcore_id; /**< Node running lcore. */ } dispatch; }; + uint64_t total_sched_objs; /**< Number of objects scheduled. */ + uint64_t total_sched_fail; /**< Number of scheduled failure. */ /* Fast path area */ #define RTE_NODE_CTX_SZ 16 uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. */ From patchwork Tue Jun 6 14:47:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128221 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D063842C40; Tue, 6 Jun 2023 16:56:20 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 58A6242D98; Tue, 6 Jun 2023 16:55:39 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 2500842D82 for ; Tue, 6 Jun 2023 16:55:36 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063337; x=1717599337; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JUa3REtKezppJy2yY4qVT5DBInLi5k/Hf4zZEP4w1Uc=; b=PcI/Xc1w0J7NoEkDDkYhkAIGdADbZq2aLC1fmb/h5QkLTTpxAAEQlalg gDl2J38LKh+f1S6P/XekEDKn1+l/rjk0kI9hwKBrfm7ivikerh/jzbQwE jjoopFvYWeEeZyEaG+seYOMHZxdeuGowNU6/BiwCbuNqPtcF9Dw4Yj7aW rKIJekRwb5NIQMtQD5dIKJ/Co7M+gG7Hevhc+JT54QyNzwajJYDSA4BbN FUoJtQMPi5itYaEVJnhM0LxkL7fS7taDSk+J7ZsnRHZssE37vECrpqgiX sfUETdPVEwWEzS2WJ22qLYCqcik/8cGoLMczKM/V3szGB/zccwtYVvajz A==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705672" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705672" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039222978" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039222978" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:25 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 15/17] examples/l3fwd-graph: introduce 
multicore dispatch worker model Date: Tue, 6 Jun 2023 22:47:44 +0800 Message-Id: <20230606144746.708388-16-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add new parameter "model" to choose mcore dispatch or rtc model. And in dispatch model, the node will affinity to worker core successively. Note: RTE_GRAPH_MODEL_SELECT is set to GRAPH_MODEL_RTC by default. Must set model the same as RTE_GRAPH_MODEL_SELECT If set it as rtc or mcore dispatch explicitly. GRAPH_MODEL_MCORE_RUNTIME_SELECT means it could choose by model in runtime. Only support one RX node for mcore dispatch model in current implementation. ./dpdk-l3fwd-graph -l 8,9,10,11 -n 4 -- -p 0x1 --config="(0,0,9)" -P --model="dispatch" Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- examples/l3fwd-graph/main.c | 231 ++++++++++++++++++++++++++++++------ 1 file changed, 193 insertions(+), 38 deletions(-) diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c index 5feeab4f0f..77a5a98aec 100644 --- a/examples/l3fwd-graph/main.c +++ b/examples/l3fwd-graph/main.c @@ -23,6 +23,12 @@ #include #include #include +#define GRAPH_MODEL_RTC 0 /* Run-to-completion model, set by default. */ +#define GRAPH_MODEL_MCORE_DISPATCH 1 /* Dispatch model. */ +#define GRAPH_MODEL_MCORE_RUNTIME_SELECT 2 /* Support to select model by */ + /* parsing model in cmdline. */ +#undef RTE_GRAPH_MODEL_SELECT +#define RTE_GRAPH_MODEL_SELECT GRAPH_MODEL_RTC #include #include #include @@ -55,6 +61,9 @@ #define NB_SOCKETS 8 +/* Graph module */ +#define WORKER_MODEL_RTC "rtc" +#define WORKER_MODEL_MCORE_DISPATCH "dispatch" /* Static global variables used within this file. 
*/ static uint16_t nb_rxd = RX_DESC_DEFAULT; static uint16_t nb_txd = TX_DESC_DEFAULT; @@ -88,6 +97,8 @@ struct lcore_rx_queue { char node_name[RTE_NODE_NAMESIZE]; }; +static uint32_t model_conf = RTE_GRAPH_MODEL_DEFAULT; + /* Lcore conf */ struct lcore_conf { uint16_t n_rx_queue; @@ -153,6 +164,19 @@ static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = { {RTE_IPV4(198, 18, 6, 0), 24, 6}, {RTE_IPV4(198, 18, 7, 0), 24, 7}, }; +static int +check_worker_model_params(void) +{ + if (model_conf == RTE_GRAPH_MODEL_MCORE_DISPATCH && + nb_lcore_params > 1) { + printf("Exceeded max number of lcore params for remote model: %hu\n", + nb_lcore_params); + return -1; + } + + return 0; +} + static int check_lcore_params(void) { @@ -276,6 +300,7 @@ print_usage(const char *prgname) " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for " "port X\n" " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n" + " --model NAME: walking model name, dispatch or rtc(by default)\n" " --no-numa: Disable numa awareness\n" " --per-port-pool: Use separate buffer pool per port\n" " --pcap-enable: Enables pcap capture\n" @@ -318,6 +343,19 @@ parse_max_pkt_len(const char *pktlen) return len; } +static void +parse_worker_model(const char *model) +{ + if (strcmp(model, WORKER_MODEL_MCORE_DISPATCH) == 0) + model_conf = RTE_GRAPH_MODEL_MCORE_DISPATCH; + else if (strcmp(model, WORKER_MODEL_RTC) == 0) + model_conf = RTE_GRAPH_MODEL_RTC; + + if (model_conf != RTE_GRAPH_MODEL_SELECT && + RTE_GRAPH_MODEL_SELECT <= RTE_GRAPH_MODEL_MCORE_DISPATCH) + rte_exit(EXIT_FAILURE, "Invalid worker model: %s", model); +} + static int parse_portmask(const char *portmask) { @@ -434,6 +472,8 @@ static const char short_options[] = "p:" /* portmask */ #define CMD_LINE_OPT_PCAP_ENABLE "pcap-enable" #define CMD_LINE_OPT_NUM_PKT_CAP "pcap-num-cap" #define CMD_LINE_OPT_PCAP_FILENAME "pcap-file-name" +#define CMD_LINE_OPT_WORKER_MODEL "model" + enum { /* Long options mapped to a short option */ @@ -449,6 +489,7 @@ enum { CMD_LINE_OPT_PARSE_PCAP_ENABLE, CMD_LINE_OPT_PARSE_NUM_PKT_CAP, CMD_LINE_OPT_PCAP_FILENAME_CAP, + CMD_LINE_OPT_WORKER_MODEL_TYPE, }; static const struct option lgopts[] = { @@ -460,6 +501,7 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_PCAP_ENABLE, 0, 0, CMD_LINE_OPT_PARSE_PCAP_ENABLE}, {CMD_LINE_OPT_NUM_PKT_CAP, 1, 0, CMD_LINE_OPT_PARSE_NUM_PKT_CAP}, {CMD_LINE_OPT_PCAP_FILENAME, 1, 0, CMD_LINE_OPT_PCAP_FILENAME_CAP}, + {CMD_LINE_OPT_WORKER_MODEL, 1, 0, CMD_LINE_OPT_WORKER_MODEL_TYPE}, {NULL, 0, 0, 0}, }; @@ -551,6 +593,11 @@ parse_args(int argc, char **argv) printf("Pcap file name: %s\n", pcap_filename); break; + case CMD_LINE_OPT_WORKER_MODEL_TYPE: + printf("Use new worker model: %s\n", optarg); + parse_worker_model(optarg); + break; + default: print_usage(prgname); return -1; @@ -788,6 +835,142 @@ config_port_max_pkt_len(struct rte_eth_conf *conf, return 0; } +static void +graph_config_mcore_dispatch(struct rte_graph_param graph_conf) +{ + uint16_t nb_patterns = graph_conf.nb_node_patterns; + int worker_count = rte_lcore_count() - 1; + int main_lcore_id = rte_get_main_lcore(); + rte_graph_t main_graph_id = 0; + struct rte_node *node_tmp; + struct lcore_conf *qconf; + struct rte_graph *graph; + rte_graph_t graph_id; + rte_graph_off_t off; + int n_rx_node = 0; + int worker_lcore; + rte_node_t count; + int i, j; + int ret; + + for (j = 0; j < nb_lcore_params; j++) { + qconf = &lcore_conf[lcore_params[j].lcore_id]; + /* Add rx node patterns of all lcore */ + for (i = 0; i < qconf->n_rx_queue; i++) { 
+ char *node_name = qconf->rx_queue_list[i].node_name; + unsigned int lcore_id = lcore_params[j].lcore_id; + + graph_conf.node_patterns[nb_patterns + n_rx_node + i] = node_name; + n_rx_node++; + ret = rte_graph_model_mcore_dispatch_node_lcore_affinity_set(node_name, + lcore_id); + if (ret == 0) + printf("Set node %s affinity to lcore %u\n", node_name, + lcore_params[j].lcore_id); + } + } + + graph_conf.nb_node_patterns = nb_patterns + n_rx_node; + graph_conf.socket_id = rte_lcore_to_socket_id(main_lcore_id); + + qconf = &lcore_conf[main_lcore_id]; + snprintf(qconf->name, sizeof(qconf->name), "worker_%u", + main_lcore_id); + + /* create main graph */ + main_graph_id = rte_graph_create(qconf->name, &graph_conf); + if (main_graph_id == RTE_GRAPH_ID_INVALID) + rte_exit(EXIT_FAILURE, + "rte_graph_create(): main_graph_id invalid for lcore %u\n", + main_lcore_id); + + /* set the graph model for the main graph */ + rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH); + qconf->graph_id = main_graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "rte_graph_lookup(): graph %s not found\n", + qconf->name); + + graph = qconf->graph; + worker_lcore = lcore_params[nb_lcore_params - 1].lcore_id; + rte_graph_foreach_node(count, off, graph, node_tmp) { + /* Need to set the node Lcore affinity before clone graph for each lcore */ + if (node_tmp->dispatch.lcore_id == RTE_MAX_LCORE) { + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + ret = rte_graph_model_mcore_dispatch_node_lcore_affinity_set(node_tmp->name, + worker_lcore); + if (ret == 0) + printf("Set node %s affinity to lcore %u\n", + node_tmp->name, worker_lcore); + } + } + + worker_lcore = main_lcore_id; + for (i = 0; i < worker_count; i++) { + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + + qconf = &lcore_conf[worker_lcore]; + snprintf(qconf->name, sizeof(qconf->name), "cloned-%u", worker_lcore); + graph_id = rte_graph_clone(main_graph_id, qconf->name, &graph_conf); + ret = rte_graph_model_mcore_dispatch_core_bind(graph_id, worker_lcore); + if (ret == 0) + printf("bind graph %d to lcore %u\n", graph_id, worker_lcore); + + /* full cloned graph name */ + snprintf(qconf->name, sizeof(qconf->name), "%s", + rte_graph_id_to_name(graph_id)); + qconf->graph_id = graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "Failed to lookup graph %s\n", + qconf->name); + continue; + } +} + +static void +graph_config_rtc(struct rte_graph_param graph_conf) +{ + uint16_t nb_patterns = graph_conf.nb_node_patterns; + struct lcore_conf *qconf; + rte_graph_t graph_id; + uint32_t lcore_id; + rte_edge_t i; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + qconf = &lcore_conf[lcore_id]; + /* Skip graph creation if no source exists */ + if (!qconf->n_rx_queue) + continue; + /* Add rx node patterns of this lcore */ + for (i = 0; i < qconf->n_rx_queue; i++) { + graph_conf.node_patterns[nb_patterns + i] = + qconf->rx_queue_list[i].node_name; + } + graph_conf.nb_node_patterns = nb_patterns + i; + graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id); + snprintf(qconf->name, sizeof(qconf->name), "worker_%u", + lcore_id); + graph_id = rte_graph_create(qconf->name, &graph_conf); + if (graph_id == RTE_GRAPH_ID_INVALID) + rte_exit(EXIT_FAILURE, + "rte_graph_create(): graph_id invalid for lcore %u\n", + lcore_id); + qconf->graph_id = graph_id; + qconf->graph = 
rte_graph_lookup(qconf->name); + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "rte_graph_lookup(): graph %s not found\n", + qconf->name); + } +} + int main(int argc, char **argv) { @@ -840,6 +1023,9 @@ main(int argc, char **argv) if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params() failed\n"); + if (check_worker_model_params() < 0) + rte_exit(EXIT_FAILURE, "check_worker_model_params() failed\n"); + ret = init_lcore_rx_queues(); if (ret < 0) rte_exit(EXIT_FAILURE, "init_lcore_rx_queues() failed\n"); @@ -1079,51 +1265,20 @@ main(int argc, char **argv) memset(&graph_conf, 0, sizeof(graph_conf)); graph_conf.node_patterns = node_patterns; + graph_conf.nb_node_patterns = nb_patterns; /* Pcap config */ graph_conf.pcap_enable = pcap_trace_enable; graph_conf.num_pkt_to_capture = packet_to_capture; graph_conf.pcap_filename = pcap_filename; - for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { - rte_graph_t graph_id; - rte_edge_t i; - - if (rte_lcore_is_enabled(lcore_id) == 0) - continue; - - qconf = &lcore_conf[lcore_id]; - - /* Skip graph creation if no source exists */ - if (!qconf->n_rx_queue) - continue; - - /* Add rx node patterns of this lcore */ - for (i = 0; i < qconf->n_rx_queue; i++) { - graph_conf.node_patterns[nb_patterns + i] = - qconf->rx_queue_list[i].node_name; - } - - graph_conf.nb_node_patterns = nb_patterns + i; - graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id); - - snprintf(qconf->name, sizeof(qconf->name), "worker_%u", - lcore_id); - - graph_id = rte_graph_create(qconf->name, &graph_conf); - if (graph_id == RTE_GRAPH_ID_INVALID) - rte_exit(EXIT_FAILURE, - "rte_graph_create(): graph_id invalid" - " for lcore %u\n", lcore_id); + if (model_conf == RTE_GRAPH_MODEL_MCORE_DISPATCH) + graph_config_mcore_dispatch(graph_conf); + else + graph_config_rtc(graph_conf); - qconf->graph_id = graph_id; - qconf->graph = rte_graph_lookup(qconf->name); - /* >8 End of graph initialization. */ - if (!qconf->graph) - rte_exit(EXIT_FAILURE, - "rte_graph_lookup(): graph %s not found\n", - qconf->name); - } + rte_graph_worker_model_set(model_conf); + /* >8 End of graph initialization. 
*/ memset(&rewrite_data, 0, sizeof(rewrite_data)); rewrite_len = sizeof(rewrite_data); From patchwork Tue Jun 6 14:47:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128222 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5C0D642C40; Tue, 6 Jun 2023 16:56:26 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7A47142D9C; Tue, 6 Jun 2023 16:55:41 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id DC0D542D9B for ; Tue, 6 Jun 2023 16:55:39 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063340; x=1717599340; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/aH3uCFjZrEBviR84ETSn0XhAf1vbBn6Wq86MtM8wd0=; b=SMbXSPtvgeGl+gY4askotfu0GJgDFBMu5IuVJc/6ma/WDdtEdlOTyEbx 258EqE+hDmCOkeNvu3CEx2nq9EdB4M7x/H2JPw8jc9fkQVOq1MHoS8u43 L0ysawc2CbV+iGgfkv3GMohKJjFL6ByvfzLtQO++aC1YvtSWr/sBvWlR3 1rLB+o1Tti0CkKRl+fGjmK6qx4vyMBNUwraY9ayp2LfG0E9Rb+U7hW5hg Ee3skns0zo7dqQaT9rSwhK7TpNWTK6m7sxsR58WcaK1HvwGndIzPLmMfD VBehQGI/iM3Ptq+WPNCNMcCecDelEQncqBeD80GkauQs0+jlDgXYROLRG w==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705723" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705723" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039223027" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039223027" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:28 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 16/17] test/graph: add functional tests for mcore dispatch model Date: Tue, 6 Jun 2023 22:47:45 +0800 Message-Id: <20230606144746.708388-17-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add functional test for mcore dispatch model including graph clone, graph model set/get, node worker affinity, graph worker binding/unbinding. 
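For orientation, the sequence these tests exercise boils down to the sketch below: pin a node to a worker lcore, clone the static graph, bind and unbind the clone, then destroy it. This is illustrative only; error handling is omitted, the "worker0" graph and "test_node00" node names come from the existing test fixtures, and the dispatch-model declarations are assumed to live in the header added earlier in this series.

#include <rte_graph.h>
#include <rte_lcore.h>
#include <rte_graph_model_mcore_dispatch.h>

static void
dispatch_model_smoke(void)
{
	/* First worker lcore (skip the main lcore, wrap around). */
	unsigned int lcore = rte_get_next_lcore(RTE_MAX_LCORE, 1, 1);
	rte_graph_t parent, clone;

	parent = rte_graph_from_name("worker0"); /* static graph built by earlier tests */

	/* Pin a node to the worker lcore before cloning. */
	rte_graph_model_mcore_dispatch_node_lcore_affinity_set("test_node00", lcore);

	/* Clone the static graph; no dispatch-specific parameters are passed here. */
	clone = rte_graph_clone(parent, "cloned-smoke", NULL);

	/* Bind the clone to the worker lcore, then release it again. */
	rte_graph_model_mcore_dispatch_core_bind(clone, lcore);
	rte_graph_model_mcore_dispatch_core_unbind(clone);

	rte_graph_destroy(clone);
}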
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- app/test/test_graph.c | 130 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 130 insertions(+) diff --git a/app/test/test_graph.c b/app/test/test_graph.c index 1a2d1e6fab..8609c0b3a4 100644 --- a/app/test/test_graph.c +++ b/app/test/test_graph.c @@ -660,6 +660,132 @@ test_create_graph(void) return 0; } +static int +test_graph_clone(void) +{ + rte_graph_t cloned_graph_id = RTE_GRAPH_ID_INVALID; + rte_graph_t main_graph_id = RTE_GRAPH_ID_INVALID; + struct rte_graph_param graph_conf; + int ret = 0; + + main_graph_id = rte_graph_from_name("worker0"); + if (main_graph_id == RTE_GRAPH_ID_INVALID) { + printf("Must create main graph first\n"); + ret = -1; + } + + graph_conf.dispatch.mp_capacity = 1024; + graph_conf.dispatch.wq_size_max = 32; + + cloned_graph_id = rte_graph_clone(main_graph_id, "cloned-test0", &graph_conf); + + if (cloned_graph_id == RTE_GRAPH_ID_INVALID) { + printf("Graph creation failed with error = %d\n", rte_errno); + ret = -1; + } + + if (strcmp(rte_graph_id_to_name(cloned_graph_id), "worker0-cloned-test0")) { + printf("Cloned graph should name as %s but get %s\n", "worker0-cloned-test", + rte_graph_id_to_name(cloned_graph_id)); + ret = -1; + } + + rte_graph_destroy(cloned_graph_id); + + return ret; +} + +static int +test_graph_model_mcore_dispatch_node_lcore_affinity_set(void) +{ + rte_graph_t cloned_graph_id = RTE_GRAPH_ID_INVALID; + unsigned int worker_lcore = RTE_MAX_LCORE; + rte_node_t nid = RTE_NODE_ID_INVALID; + char node_name[64] = "test_node00"; + struct rte_node *node; + int ret = 0; + + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + ret = rte_graph_model_mcore_dispatch_node_lcore_affinity_set(node_name, worker_lcore); + if (ret == 0) + printf("Set node %s affinity to lcore %u\n", node_name, worker_lcore); + + nid = rte_node_from_name(node_name); + cloned_graph_id = rte_graph_clone(graph_id, "cloned-test1", NULL); + node = rte_graph_node_get(cloned_graph_id, nid); + + if (node->dispatch.lcore_id != worker_lcore) { + printf("set node affinity failed\n"); + ret = -1; + } + + rte_graph_destroy(cloned_graph_id); + + return ret; +} + +static int +test_graph_model_mcore_dispatch_core_bind_unbind(void) +{ + rte_graph_t cloned_graph_id = RTE_GRAPH_ID_INVALID; + unsigned int worker_lcore = RTE_MAX_LCORE; + struct rte_graph *graph; + int ret = 0; + + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + cloned_graph_id = rte_graph_clone(graph_id, "cloned-test2", NULL); + + ret = rte_graph_model_mcore_dispatch_core_bind(cloned_graph_id, worker_lcore); + if (ret != 0) { + printf("bind graph %d to lcore %u failed\n", graph_id, worker_lcore); + ret = -1; + } + + graph = rte_graph_lookup("worker0-cloned-test2"); + + if (graph->dispatch.lcore_id != worker_lcore) { + printf("bind graph %s(id:%d) with lcore %u failed\n", + graph->name, graph->id, worker_lcore); + ret = -1; + } + + rte_graph_model_mcore_dispatch_core_unbind(cloned_graph_id); + if (graph->dispatch.lcore_id != RTE_MAX_LCORE) { + printf("unbind graph %s(id:%d) failed %d\n", + graph->name, graph->id, graph->dispatch.lcore_id); + ret = -1; + } + + rte_graph_destroy(cloned_graph_id); + + return ret; +} + +static int +test_graph_worker_model_set_get(void) +{ + rte_graph_t cloned_graph_id = RTE_GRAPH_ID_INVALID; + struct rte_graph *graph; + int ret = 0; + + cloned_graph_id = rte_graph_clone(graph_id, "cloned-test3", NULL); + ret = rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH); + if (ret != 0) { + 
printf("Set graph mcore dispatch model failed\n"); + ret = -1; + } + + graph = rte_graph_lookup("worker0-cloned-test3"); + if (rte_graph_worker_model_get(graph) != RTE_GRAPH_MODEL_MCORE_DISPATCH) { + printf("Get graph worker model failed\n"); + ret = -1; + } + + rte_graph_destroy(cloned_graph_id); + + return 0; +} + static int test_graph_walk(void) { @@ -837,6 +963,10 @@ static struct unit_test_suite graph_testsuite = { TEST_CASE(test_update_edges), TEST_CASE(test_lookup_functions), TEST_CASE(test_create_graph), + TEST_CASE(test_graph_clone), + TEST_CASE(test_graph_model_mcore_dispatch_node_lcore_affinity_set), + TEST_CASE(test_graph_model_mcore_dispatch_core_bind_unbind), + TEST_CASE(test_graph_worker_model_set_get), TEST_CASE(test_graph_lookup_functions), TEST_CASE(test_graph_walk), TEST_CASE(test_print_stats), From patchwork Tue Jun 6 14:47:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 128223 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E39BD42C40; Tue, 6 Jun 2023 16:56:31 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id ADA3542D85; Tue, 6 Jun 2023 16:55:43 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 99EBB42DA0 for ; Tue, 6 Jun 2023 16:55:41 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686063341; x=1717599341; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lBgqkLshrZe6BW13IiXo6i5LJx9AehXaxAh6DNj9ZFk=; b=XLYWYoyyXMnQxK8cfvi5j9lek4QrTfA/OtdrOmYcNGYJnDqYgKb3/geI vz/hpGvG9GU37gKLj3M1dXfMZtmrt+jvgoq2bm+b5kY+A+ddJhrKJabJT jYWtsyO9L28BkFbvjPHwf351Ox7VIdtKsqTZKEZMKAbUB3MU1qEDZ8oSm hmHhxLSDgl3zC4p2/KZotVfz4Z/B6N2iJNHNIvub6u8GaY8QN9e0ULAox vAaBBH4WU6aMrOL/cDxTJh+LwuvoQ00HBKiZaDsYGBS5Hmi7AnnS3a2B6 FuMvq1ugqI0DCgs11zeAEizdWOeNcu3b13BdVgXjGS+fG3W80RLwbBttK g==; X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="356705763" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="356705763" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Jun 2023 07:55:33 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10733"; a="1039223064" X-IronPort-AV: E=Sophos;i="6.00,221,1681196400"; d="scan'208";a="1039223064" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94]) by fmsmga005.fm.intel.com with ESMTP; 06 Jun 2023 07:55:31 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com, stephen@networkplumber.org, pbhagavatula@marvell.com, jerinjacobk@gmail.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, mattias.ronnblom@ericsson.com, Zhirun Yan Subject: [PATCH v8 17/17] doc: update multicore dispatch model in graph guides Date: Tue, 6 Jun 2023 22:47:46 +0800 Message-Id: <20230606144746.708388-18-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230606144746.708388-1-zhirun.yan@intel.com> References: <20230605111923.3772260-1-zhirun.yan@intel.com> <20230606144746.708388-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Update graph documentation to introduce new multicore dispatch model. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- doc/guides/prog_guide/graph_lib.rst | 68 +++++++++++++++++++++++++++-- 1 file changed, 64 insertions(+), 4 deletions(-) diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst index 1cfdc86433..aecf3d2e03 100644 --- a/doc/guides/prog_guide/graph_lib.rst +++ b/doc/guides/prog_guide/graph_lib.rst @@ -189,14 +189,74 @@ In the above example, A graph object will be created with ethdev Rx node of port 0 and queue 0, all ipv4* nodes in the system, and ethdev tx node of all ports. -Multicore graph processing -~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the current graph library implementation, specifically, -``rte_graph_walk()`` and ``rte_node_enqueue*`` fast path API functions +Choosing a graph model +~~~~~~~~~~~~~~~~~~~~~~ +Currently, there are two different graph walking models. The model can be +selected at compile time with the macro RTE_GRAPH_MODEL_SELECT, or chosen +at runtime. +An application must #define RTE_GRAPH_MODEL_SELECT before including +rte_graph_worker.h. + +In l3fwd-graph, RTE_GRAPH_MODEL_SELECT can be set to a specific model explicitly +for performance-sensitive use cases. +Alternatively, set the macro to GRAPH_MODEL_MCORE_RUNTIME_SELECT, parse +``"--model=NAME"`` from the command line and use ``rte_graph_worker_model_set()`` +to select the walking model at runtime. + +RTC (Run-To-Completion) +^^^^^^^^^^^^^^^^^^^^^^^ +This is the default graph walking model. Specifically, +``rte_graph_walk_rtc()`` and ``rte_node_enqueue*`` fast path API functions are designed to work on single-core to have better performance. The fast path API works on graph object, So the multi-core graph processing strategy would be to create graph object PER WORKER. +Example: + +Graph: node-0 -> node-1 -> node-2 @Core0. + +.. code-block:: diff + + + - - - - - - - - - - - - - - - - - - - - - + + ' Core #0 ' + ' ' + ' +--------+ +---------+ +--------+ ' + ' | Node-0 | --> | Node-1 | --> | Node-2 | ' + ' +--------+ +---------+ +--------+ ' + ' ' + + - - - - - - - - - - - - - - - - - - - - - + + +Dispatch model +^^^^^^^^^^^^^^ +The dispatch model enables a cross-core dispatching mechanism which employs +a scheduling work-queue to dispatch streams to other worker cores that are +associated with the destination node. + +Use ``rte_graph_model_mcore_dispatch_node_lcore_affinity_set()`` to set the lcore +affinity of a node. +Each worker core runs its own copy of the graph. Use ``rte_graph_clone()`` to clone +the graph for each worker core and use ``rte_graph_model_mcore_dispatch_core_bind()`` to +bind a graph to a worker core. + +Example: + +Graph topology: node-0 -> node-1; node-1 -> node-2; node-2 -> node-3. +Config graph: node-0 @Core0; node-1/3 @Core1; node-2 @Core2. + +.. code-block:: diff + + + - - - - - -+ +- - - - - - - - - - - - - + + - - - - - -+ + ' Core #0 ' ' Core #1 ' ' Core #2 ' + ' ' ' ' ' ' + ' +--------+ ' ' +--------+ +--------+ ' ' +--------+ ' + ' | Node-0 | - - - ->| Node-1 | | Node-3 |<- - - - | Node-2 | ' + ' +--------+ ' ' +--------+ +--------+ ' ' +--------+ ' + ' ' ' | ' ' ^ ' + + - - - - - -+ +- - -|- - - - - - - - - - + + - - -|- - -+ + | | + + - - - - - - - - - - - - - - - - + + + In fast path ~~~~~~~~~~~~ Typical fast-path code looks like below, where the application