From patchwork Mon Oct 2 10:58:35 2023
X-Patchwork-Submitter: "Van Haaren, Harry"
X-Patchwork-Id: 132261
X-Patchwork-Delegate: jerinj@marvell.com
From: Harry van Haaren
To: dev@dpdk.org
Cc: jerinj@marvell.com,
 Harry van Haaren, stable@dpdk.org, Bruce Richardson
Subject: [PATCH v3 1/2] event/sw: fix ordering corruption with op release
Date: Mon, 2 Oct 2023 11:58:35 +0100
Message-Id: <20231002105836.3055379-1-harry.van.haaren@intel.com>
In-Reply-To: <20230914105852.82471-2-harry.van.haaren@intel.com>
References: <20230914105852.82471-2-harry.van.haaren@intel.com>
List-Id: DPDK patches and discussions

This commit changes the scheduler logic to always reset reorder-buffer (and QID/FID) entries when writing them. This avoids re-use of stale ROB/QID/FID data, which previously caused ordering issues. Before this commit, release events left the history-list in an inconsistent state, and later events with op type FORWARD could be incorrectly reordered. A partial fix was committed previously; it is now resolved for all cases in a more general way, hence the two Fixes lines here.
Fixes: 2e516d18dc01 ("event/sw: fix events mis-identified as needing reorder")
Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")
Cc: stable@dpdk.org

Suggested-by: Bruce Richardson
Signed-off-by: Harry van Haaren
Acked-by: Bruce Richardson

---

v3:
- Fixup whitespace and line wrapping suggestions (Bruce)
- Add Fixes lines (Bruce)
- Cc stable, as this is a functionality bugfix
- Including Ack from v2, as no significant code changes

v2:
- Rework fix to simpler suggestion (Bruce)
- Respin patchset to "apply order" (Bruce)
---
 drivers/event/sw/sw_evdev_scheduler.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index de6ed21643..cc652815e4 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -90,8 +90,10 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		sw->cq_ring_space[cq]--;
 
 		int head = (p->hist_head++ & (SW_PORT_HIST_LIST-1));
-		p->hist_list[head].fid = flow_id;
-		p->hist_list[head].qid = qid_id;
+		p->hist_list[head] = (struct sw_hist_list_entry) {
+			.qid = qid_id,
+			.fid = flow_id,
+		};
 
 		p->stats.tx_pkts++;
 		qid->stats.tx_pkts++;
@@ -162,8 +164,10 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		qid->stats.tx_pkts++;
 
 		const int head = (p->hist_head & (SW_PORT_HIST_LIST-1));
-		p->hist_list[head].fid = SW_HASH_FLOWID(qe->flow_id);
-		p->hist_list[head].qid = qid_id;
+		p->hist_list[head] = (struct sw_hist_list_entry) {
+			.qid = qid_id,
+			.fid = SW_HASH_FLOWID(qe->flow_id),
+		};
 
 		if (keep_order)
 			rob_ring_dequeue(qid->reorder_buffer_freelist,
@@ -419,7 +423,6 @@ __pull_port_lb(struct sw_evdev *sw, uint32_t port_id, int allow_reorder)
 			struct reorder_buffer_entry *rob_entry =
 					hist_entry->rob_entry;
 
-			hist_entry->rob_entry = NULL;
 			/* Although fragmentation not currently
 			 * supported by eventdev API, we support it
 			 * here.
			 * Open: How do we alert the user that

From patchwork Mon Oct 2 10:58:36 2023
X-Patchwork-Submitter: "Van Haaren, Harry"
X-Patchwork-Id: 132262
X-Patchwork-Delegate: jerinj@marvell.com
From: Harry van Haaren
To:
 dev@dpdk.org
Cc: jerinj@marvell.com, Harry van Haaren, Bruce Richardson
Subject: [PATCH v3 2/2] event/sw: add selftest for ordered history list
Date: Mon, 2 Oct 2023 11:58:36 +0100
Message-Id: <20231002105836.3055379-2-harry.van.haaren@intel.com>
In-Reply-To: <20231002105836.3055379-1-harry.van.haaren@intel.com>
References: <20230914105852.82471-2-harry.van.haaren@intel.com> <20231002105836.3055379-1-harry.van.haaren@intel.com>
List-Id: DPDK patches and discussions

This commit adds a unit test for an issue where ordered history-list entries were not correctly cleared when the returned event is of op type RELEASE. The result of the history-list bug is that a future event which re-uses that history-list slot, but has an op type of FORWARD, is incorrectly reordered. The existing unit tests did not cover a RELEASE on an ORDERED queue; the new test does, and then stress-tests the history-list by iterating HIST_LIST times afterwards.
Signed-off-by: Harry van Haaren
Acked-by: Bruce Richardson

---

v3:
- Including Ack from v2
---
 drivers/event/sw/sw_evdev_selftest.c | 132 +++++++++++++++++++++++++++
 1 file changed, 132 insertions(+)

diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
index 3aa8d76ca8..59afa260c6 100644
--- a/drivers/event/sw/sw_evdev_selftest.c
+++ b/drivers/event/sw/sw_evdev_selftest.c
@@ -2959,6 +2959,132 @@ dev_stop_flush(struct test *t) /* test to check we can properly flush events */
 	return -1;
 }
 
+static int
+ordered_atomic_hist_completion(struct test *t)
+{
+	const int rx_enq = 0;
+	int err;
+
+	/* Create instance with 2 ports, 1 ordered QID and 1 atomic QID */
+	if (init(t, 2, 2) < 0 ||
+			create_ports(t, 2) < 0 ||
+			create_ordered_qids(t, 1) < 0 ||
+			create_atomic_qids(t, 1) < 0)
+		return -1;
+
+	/* Helpers to identify queues */
+	const uint8_t qid_ordered = t->qid[0];
+	const uint8_t qid_atomic = t->qid[1];
+
+	/* CQ mapping to QID */
+	if (rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL, 1) != 1) {
+		printf("%d: error mapping port 1 qid\n", __LINE__);
+		return -1;
+	}
+	if (rte_event_port_link(evdev, t->port[1], &t->qid[1], NULL, 1) != 1) {
+		printf("%d: error mapping port 1 qid\n", __LINE__);
+		return -1;
+	}
+	if (rte_event_dev_start(evdev) < 0) {
+		printf("%d: Error with start call\n", __LINE__);
+		return -1;
+	}
+
+	/* Enqueue 1x ordered event, to be RELEASE-ed by the worker
+	 * CPU, which may cause hist-list corruption (by not completing)
+	 */
+	struct rte_event ord_ev = {
+		.op = RTE_EVENT_OP_NEW,
+		.queue_id = qid_ordered,
+		.event_type = RTE_EVENT_TYPE_CPU,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+	};
+	err = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ord_ev, 1);
+	if (err != 1) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
+	}
+
+	/* call the scheduler. This schedules the above event as a single
+	 * event in an ORDERED queue, to the worker.
+	 */
+	rte_service_run_iter_on_app_lcore(t->service_id, 1);
+
+	/* Dequeue ORDERED event 0 from port 1, so that we can then drop */
+	struct rte_event ev;
+	if (!rte_event_dequeue_burst(evdev, t->port[1], &ev, 1, 0)) {
+		printf("%d: failed to dequeue\n", __LINE__);
+		return -1;
+	}
+
+	/* drop the ORDERED event. Here the history list should be completed,
+	 * but might not be if the hist-list bug exists. Call scheduler to make
+	 * it act on the RELEASE that was enqueued.
+	 */
+	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
+	rte_service_run_iter_on_app_lcore(t->service_id, 1);
+
+	/* Enqueue 1x atomic event, to then FORWARD to trigger atomic hist-list
+	 * completion. If the bug exists, the ORDERED entry may be completed in
+	 * error (aka, using the ORDERED-ROB for the ATOMIC event). This is the
+	 * main focus of this unit test.
+	 */
+	{
+		struct rte_event ev = {
+			.op = RTE_EVENT_OP_NEW,
+			.queue_id = qid_atomic,
+			.event_type = RTE_EVENT_TYPE_CPU,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.flow_id = 123,
+		};
+
+		err = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);
+		if (err != 1) {
+			printf("%d: Failed to enqueue\n", __LINE__);
+			return -1;
+		}
+	}
+	rte_service_run_iter_on_app_lcore(t->service_id, 1);
+
+	/* Deq ATM event, then forward it for more than HIST_LIST_SIZE times,
+	 * to re-use the history list entry that may be corrupted previously.
+	 */
+	for (int i = 0; i < SW_PORT_HIST_LIST + 2; i++) {
+		if (!rte_event_dequeue_burst(evdev, t->port[1], &ev, 1, 0)) {
+			printf("%d: failed to dequeue, did corrupt ORD hist "
+			       "list steal this ATM event?\n", __LINE__);
+			return -1;
+		}
+
+		/* Re-enqueue the ATM event as FWD, trigger hist-list.
+		 */
+		ev.op = RTE_EVENT_OP_FORWARD;
+		err = rte_event_enqueue_burst(evdev, t->port[1], &ev, 1);
+		if (err != 1) {
+			printf("%d: Failed to enqueue\n", __LINE__);
+			return -1;
+		}
+
+		rte_service_run_iter_on_app_lcore(t->service_id, 1);
+	}
+
+	/* If HIST-LIST + N count of dequeues succeed above, the hist list
+	 * has not been corrupted. If it is corrupted, the ATM event is pushed
+	 * into the ORDERED-ROB and will not dequeue.
+	 */
+
+	/* release the ATM event that's been forwarded HIST_LIST times */
+	err = rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
+	if (err != 1) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
+	}
+
+	rte_service_run_iter_on_app_lcore(t->service_id, 1);
+
+	cleanup(t);
+	return 0;
+}
+
 static int
 worker_loopback_worker_fn(void *arg)
 {
@@ -3388,6 +3514,12 @@ test_sw_eventdev(void)
 		printf("ERROR - Stop Flush test FAILED.\n");
 		goto test_fail;
 	}
+	printf("*** Running Ordered & Atomic hist-list completion test...\n");
+	ret = ordered_atomic_hist_completion(t);
+	if (ret != 0) {
+		printf("ERROR - Ordered & Atomic hist-list test FAILED.\n");
+		goto test_fail;
+	}
 	if (rte_lcore_count() >= 3) {
 		printf("*** Running Worker loopback test...\n");
 		ret = worker_loopback(t, 0);