From patchwork Mon Dec 10 09:14:57 2018
X-Patchwork-Submitter: "Jakub Grajciar -X (jgrajcia - PANTHEON TECH SRO at Cisco)"
X-Patchwork-Id: 48616
From: Jakub Grajciar
CC: Jakub Grajciar
Date: Mon, 10 Dec 2018 10:14:57 +0100
Message-ID: <20181210091457.6031-1-jgrajcia@cisco.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v3] eal_interrupts: add option for pending callback unregister
List-Id: DPDK patches and discussions

Use case: if a callback is used to receive messages from a socket, and the
message received is a disconnect/error, the callback needs to be
unregistered, but cannot be because it is still active. With this patch it
is possible to mark the callback to be unregistered once the interrupt
process is done with this interrupt source.

Signed-off-by: Jakub Grajciar
---
 .../common/include/rte_interrupts.h          | 30 +++++++
 lib/librte_eal/linuxapp/eal/eal_interrupts.c | 85 ++++++++++++++++++-
 2 files changed, 113 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_interrupts.h b/lib/librte_eal/common/include/rte_interrupts.h
index d751a6378..3946742ad 100644
--- a/lib/librte_eal/common/include/rte_interrupts.h
+++ b/lib/librte_eal/common/include/rte_interrupts.h
@@ -24,6 +24,13 @@ struct rte_intr_handle;
 /** Function to be registered for the specific interrupt */
 typedef void (*rte_intr_callback_fn)(void *cb_arg);
 
+/**
+ * Function to call after a callback is unregistered.
+ * Can be used to close fd and free cb_arg.
+ */
+typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
+				void *cb_arg);
+
 #include "rte_eal_interrupts.h"
 
 /**
@@ -61,6 +68,29 @@ int rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 int rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 				rte_intr_callback_fn cb, void *cb_arg);
 
+/**
+ * It unregisters the callback according to the specified interrupt handle,
+ * after it is no longer active. Fails if the source is not active.
+ *
+ * @param intr_handle
+ *  pointer to the interrupt handle.
+ * @param cb_fn
+ *  callback address.
+ * @param cb_arg
+ *  address of parameter for callback, (void *)-1 means to remove all
+ *  registered callbacks which have the same callback address.
+ * @param ucb_fn
+ *  callback to call before cb is unregistered (optional).
+ *  can be used to close fd and free cb_arg.
+ *
+ * @return
+ *  - On success, return the number of callback entities marked for remove.
+ *  - On failure, a negative value.
+ */
+int rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
+				rte_intr_callback_fn cb_fn, void *cb_arg,
+				rte_intr_unregister_callback_fn ucb_fn);
+
 /**
  * It enables the interrupt for the specified handle.
 *
diff --git a/lib/librte_eal/linuxapp/eal/eal_interrupts.c b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
index cbac451e1..79ad5e8d7 100644
--- a/lib/librte_eal/linuxapp/eal/eal_interrupts.c
+++ b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
@@ -76,6 +76,8 @@ struct rte_intr_callback {
 	TAILQ_ENTRY(rte_intr_callback) next;
 	rte_intr_callback_fn cb_fn;  /**< callback address */
 	void *cb_arg;                /**< parameter for callback */
+	uint8_t pending_delete;      /**< delete after callback is called */
+	rte_intr_unregister_callback_fn ucb_fn; /**< fn to call before cb is deleted */
 };
 
 struct rte_intr_source {
@@ -472,6 +474,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	}
 	callback->cb_fn = cb;
 	callback->cb_arg = cb_arg;
+	callback->pending_delete = 0;
+	callback->ucb_fn = NULL;
 
 	rte_spinlock_lock(&intr_lock);
 
@@ -518,6 +522,57 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
+int
+rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
+				rte_intr_callback_fn cb_fn, void *cb_arg,
+				rte_intr_unregister_callback_fn ucb_fn)
+{
+	int ret;
+	struct rte_intr_source *src;
+	struct rte_intr_callback *cb, *next;
+
+	/* do parameter checking first */
+	if (intr_handle == NULL || intr_handle->fd < 0) {
+		RTE_LOG(ERR, EAL,
+		"Unregistering with invalid input parameter\n");
+		return -EINVAL;
+	}
+
+	rte_spinlock_lock(&intr_lock);
+
+	/* check if an interrupt source exists for the fd */
+	TAILQ_FOREACH(src, &intr_sources, next)
+		if (src->intr_handle.fd == intr_handle->fd)
+			break;
+
+	/* No interrupt source registered for the fd */
+	if (src == NULL) {
+		ret = -ENOENT;
+
+	/* only usable if the source is active */
+	} else if (src->active == 0) {
+		ret = -EAGAIN;
+
+	} else {
+		ret = 0;
+
+		/* walk through the callbacks and mark all that match. */
+		for (cb = TAILQ_FIRST(&src->callbacks); cb != NULL; cb = next) {
+			next = TAILQ_NEXT(cb, next);
+			if (cb->cb_fn == cb_fn && (cb_arg == (void *)-1 ||
+					cb->cb_arg == cb_arg)) {
+				cb->pending_delete = 1;
+				cb->ucb_fn = ucb_fn;
+				ret++;
+			}
+		}
+	}
+
+	rte_spinlock_unlock(&intr_lock);
+
+	return ret;
+}
+
 int
 rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -698,7 +753,7 @@ static int
 eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 {
 	bool call = false;
-	int n, bytes_read;
+	int n, bytes_read, rv;
 	struct rte_intr_source *src;
 	struct rte_intr_callback *cb, *next;
 	union rte_intr_read_buffer buf;
@@ -823,9 +878,35 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 			rte_spinlock_lock(&intr_lock);
 		}
 	}
-	/* we done with that interrupt source, release it. */
 	src->active = 0;
+
+	rv = 0;
+
+	/* check if any callbacks are supposed to be removed */
+	for (cb = TAILQ_FIRST(&src->callbacks); cb != NULL; cb = next) {
+		next = TAILQ_NEXT(cb, next);
+		if (cb->pending_delete) {
+			TAILQ_REMOVE(&src->callbacks, cb, next);
+			if (cb->ucb_fn)
+				cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+			free(cb);
+			rv++;
+		}
+	}
+
+	/* all callbacks for that source are removed. */
+	if (TAILQ_EMPTY(&src->callbacks)) {
+		TAILQ_REMOVE(&intr_sources, src, next);
+		free(src);
+	}
+
+	/* notify the pipe fd waited by epoll_wait to rebuild the wait list */
+	if (rv >= 0 && write(intr_pipe.writefd, "1", 1) < 0) {
+		rte_spinlock_unlock(&intr_lock);
+		return -EPIPE;
+	}
+
 	rte_spinlock_unlock(&intr_lock);
 }