From patchwork Fri Sep 3 12:40:56 2021
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 97931
X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra
To: , Thomas Monjalon, Harman Kalra
Date: Fri, 3 Sep 2021 18:10:56 +0530
Message-ID: <20210903124102.47425-2-hkalra@marvell.com>
In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com>
References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com>
Subject: [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes
List-Id: DPDK patches and discussions

Define prototypes of the get/set APIs for accessing and manipulating interrupt handle fields. The internal interrupt header, rte_eal_interrupts.h, is rearranged: the API declarations are moved to rte_interrupts.h, and the epoll-specific definitions are moved to a new header, rte_epoll.h. Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra Acked-by: Ray Kinsella --- MAINTAINERS | 1 + lib/eal/include/meson.build | 1 + lib/eal/include/rte_eal_interrupts.h | 201 --------- lib/eal/include/rte_epoll.h | 116 +++++ lib/eal/include/rte_interrupts.h | 653 ++++++++++++++++++++++++++- 5 files changed, 769 insertions(+), 203 deletions(-) create mode 100644 lib/eal/include/rte_epoll.h diff --git a/MAINTAINERS b/MAINTAINERS index 266f5ac1da..53b092f532 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -208,6 +208,7 @@ F: app/test/test_memzone.c Interrupt Subsystem M: Harman Kalra +F: lib/eal/include/rte_epoll.h F: lib/eal/*/*interrupts.* F: app/test/test_interrupts.c diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index 88a9eba12f..8e258607b8 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -19,6 +19,7 @@ headers += files( 'rte_eal_memconfig.h', 'rte_eal_trace.h', 'rte_errno.h', + 'rte_epoll.h', 'rte_fbarray.h', 'rte_hexdump.h', 'rte_hypervisor.h', diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h index 00bcc19b6d..68ca3a042d 100644 --- a/lib/eal/include/rte_eal_interrupts.h +++ b/lib/eal/include/rte_eal_interrupts.h @@ -39,32 +39,6 @@ enum rte_intr_handle_type { RTE_INTR_HANDLE_MAX /**< count of elements */ }; -#define RTE_INTR_EVENT_ADD 1UL -#define RTE_INTR_EVENT_DEL 2UL - -typedef void (*rte_intr_event_cb_t)(int fd, void *arg); - -struct rte_epoll_data { - uint32_t event; /**< event type */ - void *data; /**< User data */ - rte_intr_event_cb_t cb_fun; /**< IN: callback fun */ - void *cb_arg; /**< IN: callback arg */ -}; - -enum { - RTE_EPOLL_INVALID = 0, - RTE_EPOLL_VALID, - RTE_EPOLL_EXEC, -}; - -/** interrupt epoll event obj, taken by epoll_event.ptr */ -struct rte_epoll_event { - uint32_t status; /**< OUT: event status */ - int fd; /**< OUT: event fd */ - int epfd; /**< OUT: epoll instance the ev associated with */ - struct rte_epoll_data epdata; -}; - /** Handle for interrupts. 
*/ struct rte_intr_handle { RTE_STD_C11 @@ -91,179 +65,4 @@ struct rte_intr_handle { int *intr_vec; /**< intr vector number array */ }; -#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */ - -/** - * It waits for events on the epoll instance. - * Retries if signal received. - * - * @param epfd - * Epoll instance fd on which the caller wait for events. - * @param events - * Memory area contains the events that will be available for the caller. - * @param maxevents - * Up to maxevents are returned, must greater than zero. - * @param timeout - * Specifying a timeout of -1 causes a block indefinitely. - * Specifying a timeout equal to zero cause to return immediately. - * @return - * - On success, returns the number of available event. - * - On failure, a negative value. - */ -int -rte_epoll_wait(int epfd, struct rte_epoll_event *events, - int maxevents, int timeout); - -/** - * It waits for events on the epoll instance. - * Does not retry if signal received. - * - * @param epfd - * Epoll instance fd on which the caller wait for events. - * @param events - * Memory area contains the events that will be available for the caller. - * @param maxevents - * Up to maxevents are returned, must greater than zero. - * @param timeout - * Specifying a timeout of -1 causes a block indefinitely. - * Specifying a timeout equal to zero cause to return immediately. - * @return - * - On success, returns the number of available event. - * - On failure, a negative value. - */ -__rte_experimental -int -rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events, - int maxevents, int timeout); - -/** - * It performs control operations on epoll instance referred by the epfd. - * It requests that the operation op be performed for the target fd. - * - * @param epfd - * Epoll instance fd on which the caller perform control operations. - * @param op - * The operation be performed for the target fd. - * @param fd - * The target fd on which the control ops perform. 
- * @param event - * Describes the object linked to the fd. - * Note: The caller must take care the object deletion after CTL_DEL. - * @return - * - On success, zero. - * - On failure, a negative value. - */ -int -rte_epoll_ctl(int epfd, int op, int fd, - struct rte_epoll_event *event); - -/** - * The function returns the per thread epoll instance. - * - * @return - * epfd the epoll instance referred to. - */ -int -rte_intr_tls_epfd(void); - -/** - * @param intr_handle - * Pointer to the interrupt handle. - * @param epfd - * Epoll instance fd which the intr vector associated to. - * @param op - * The operation be performed for the vector. - * Operation type of {ADD, DEL}. - * @param vec - * RX intr vector number added to the epoll instance wait list. - * @param data - * User raw data. - * @return - * - On success, zero. - * - On failure, a negative value. - */ -int -rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, - int epfd, int op, unsigned int vec, void *data); - -/** - * It deletes registered eventfds. - * - * @param intr_handle - * Pointer to the interrupt handle. - */ -void -rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle); - -/** - * It enables the packet I/O interrupt event if it's necessary. - * It creates event fd for each interrupt vector when MSIX is used, - * otherwise it multiplexes a single event fd. - * - * @param intr_handle - * Pointer to the interrupt handle. - * @param nb_efd - * Number of interrupt vector trying to enable. - * The value 0 is not allowed. - * @return - * - On success, zero. - * - On failure, a negative value. - */ -int -rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd); - -/** - * It disables the packet I/O interrupt event. - * It deletes registered eventfds and closes the open fds. - * - * @param intr_handle - * Pointer to the interrupt handle. - */ -void -rte_intr_efd_disable(struct rte_intr_handle *intr_handle); - -/** - * The packet I/O interrupt on datapath is enabled or not. 
- * - * @param intr_handle - * Pointer to the interrupt handle. - */ -int -rte_intr_dp_is_en(struct rte_intr_handle *intr_handle); - -/** - * The interrupt handle instance allows other causes or not. - * Other causes stand for any none packet I/O interrupts. - * - * @param intr_handle - * Pointer to the interrupt handle. - */ -int -rte_intr_allow_others(struct rte_intr_handle *intr_handle); - -/** - * The multiple interrupt vector capability of interrupt handle instance. - * It returns zero if no multiple interrupt vector support. - * - * @param intr_handle - * Pointer to the interrupt handle. - */ -int -rte_intr_cap_multiple(struct rte_intr_handle *intr_handle); - -/** - * @warning - * @b EXPERIMENTAL: this API may change without prior notice - * - * @internal - * Check if currently executing in interrupt context - * - * @return - * - non zero in case of interrupt context - * - zero in case of process context - */ -__rte_experimental -int -rte_thread_is_intr(void); - #endif /* _RTE_EAL_INTERRUPTS_H_ */ diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h new file mode 100644 index 0000000000..182353cfd4 --- /dev/null +++ b/lib/eal/include/rte_epoll.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell International Ltd. + */ + +#ifndef __RTE_EPOLL_H__ +#define __RTE_EPOLL_H__ + +/** + * @file + * The rte_epoll provides interfaces functions to add delete events, + * wait poll for an event. 
+ */ + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +#define RTE_INTR_EVENT_ADD 1UL +#define RTE_INTR_EVENT_DEL 2UL + +typedef void (*rte_intr_event_cb_t)(int fd, void *arg); + +struct rte_epoll_data { + uint32_t event; /**< event type */ + void *data; /**< User data */ + rte_intr_event_cb_t cb_fun; /**< IN: callback fun */ + void *cb_arg; /**< IN: callback arg */ +}; + +enum { + RTE_EPOLL_INVALID = 0, + RTE_EPOLL_VALID, + RTE_EPOLL_EXEC, +}; + +/** interrupt epoll event obj, taken by epoll_event.ptr */ +struct rte_epoll_event { + uint32_t status; /**< OUT: event status */ + int fd; /**< OUT: event fd */ + int epfd; /**< OUT: epoll instance the ev associated with */ + struct rte_epoll_data epdata; +}; + +#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */ + +/** + * It waits for events on the epoll instance. + * Retries if signal received. + * + * @param epfd + * Epoll instance fd on which the caller wait for events. + * @param events + * Memory area contains the events that will be available for the caller. + * @param maxevents + * Up to maxevents are returned, must greater than zero. + * @param timeout + * Specifying a timeout of -1 causes a block indefinitely. + * Specifying a timeout equal to zero cause to return immediately. + * @return + * - On success, returns the number of available event. + * - On failure, a negative value. + */ +int +rte_epoll_wait(int epfd, struct rte_epoll_event *events, + int maxevents, int timeout); + +/** + * It waits for events on the epoll instance. + * Does not retry if signal received. + * + * @param epfd + * Epoll instance fd on which the caller wait for events. + * @param events + * Memory area contains the events that will be available for the caller. + * @param maxevents + * Up to maxevents are returned, must greater than zero. + * @param timeout + * Specifying a timeout of -1 causes a block indefinitely. + * Specifying a timeout equal to zero cause to return immediately. 
+ * @return + * - On success, returns the number of available event. + * - On failure, a negative value. + */ +__rte_experimental +int +rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events, + int maxevents, int timeout); + +/** + * It performs control operations on epoll instance referred by the epfd. + * It requests that the operation op be performed for the target fd. + * + * @param epfd + * Epoll instance fd on which the caller perform control operations. + * @param op + * The operation be performed for the target fd. + * @param fd + * The target fd on which the control ops perform. + * @param event + * Describes the object linked to the fd. + * Note: The caller must take care the object deletion after CTL_DEL. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +int +rte_epoll_ctl(int epfd, int op, int fd, + struct rte_epoll_event *event); + +#ifdef __cplusplus +} +#endif + +#endif /* __RTE_EPOLL_H__ */ diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h index cc3bf45d8c..afc3262967 100644 --- a/lib/eal/include/rte_interrupts.h +++ b/lib/eal/include/rte_interrupts.h @@ -5,8 +5,11 @@ #ifndef _RTE_INTERRUPTS_H_ #define _RTE_INTERRUPTS_H_ +#include + #include #include +#include /** * @file @@ -22,6 +25,10 @@ extern "C" { /** Interrupt handle */ struct rte_intr_handle; +#define RTE_INTR_HANDLE_DEFAULT_SIZE 1 + +#include "rte_eal_interrupts.h" + /** Function to be registered for the specific interrupt */ typedef void (*rte_intr_callback_fn)(void *cb_arg); @@ -32,8 +39,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg); typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle, void *cb_arg); -#include "rte_eal_interrupts.h" - /** * It registers the callback for the specific interrupt. Multiple * callbacks can be registered at the same time. 
@@ -163,6 +168,650 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle); __rte_experimental int rte_intr_ack(const struct rte_intr_handle *intr_handle); +/** + * The function returns the per thread epoll instance. + * + * @return + * epfd the epoll instance referred to. + */ +int +rte_intr_tls_epfd(void); + +/** + * @param intr_handle + * Pointer to the interrupt handle. + * @param epfd + * Epoll instance fd which the intr vector associated to. + * @param op + * The operation be performed for the vector. + * Operation type of {ADD, DEL}. + * @param vec + * RX intr vector number added to the epoll instance wait list. + * @param data + * User raw data. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +int +rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, + int epfd, int op, unsigned int vec, void *data); + +/** + * It deletes registered eventfds. + * + * @param intr_handle + * Pointer to the interrupt handle. + */ +void +rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle); + +/** + * It enables the packet I/O interrupt event if it's necessary. + * It creates event fd for each interrupt vector when MSIX is used, + * otherwise it multiplexes a single event fd. + * + * @param intr_handle + * Pointer to the interrupt handle. + * @param nb_efd + * Number of interrupt vector trying to enable. + * The value 0 is not allowed. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +int +rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd); + +/** + * It disables the packet I/O interrupt event. + * It deletes registered eventfds and closes the open fds. + * + * @param intr_handle + * Pointer to the interrupt handle. + */ +void +rte_intr_efd_disable(struct rte_intr_handle *intr_handle); + +/** + * The packet I/O interrupt on datapath is enabled or not. + * + * @param intr_handle + * Pointer to the interrupt handle. 
+ */ +int +rte_intr_dp_is_en(struct rte_intr_handle *intr_handle); + +/** + * The interrupt handle instance allows other causes or not. + * Other causes stand for any none packet I/O interrupts. + * + * @param intr_handle + * Pointer to the interrupt handle. + */ +int +rte_intr_allow_others(struct rte_intr_handle *intr_handle); + +/** + * The multiple interrupt vector capability of interrupt handle instance. + * It returns zero if no multiple interrupt vector support. + * + * @param intr_handle + * Pointer to the interrupt handle. + */ +int +rte_intr_cap_multiple(struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * @internal + * Check if currently executing in interrupt context + * + * @return + * - non zero in case of interrupt context + * - zero in case of process context + */ +__rte_experimental +int +rte_thread_is_intr(void); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * It allocates memory for interrupt instances based on size provided by user + * i.e. whether a single handle or array of handles is defined by size. Memory + * to be allocated from a hugepage or normal allocation is also defined by user. + * Default memory allocation for event fds and event list array is done which + * can be realloced later as per the requirement. + * + * This function should be called from application or driver, before calling any + * of the interrupt APIs. + * + * @param size + * No of interrupt instances. + * @param from_hugepage + * Memory allocation from hugepage or normal allocation + * + * @return + * - On success, address of first interrupt handle. + * - On failure, NULL. + */ +__rte_experimental +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, + bool from_hugepage); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the address of interrupt handle instance as per the index + * provided. 
+ * + * @param intr_handle + * Base address of interrupt handle array. + * @param index + * Index of the interrupt handle + * + * @return + * - On success, address of interrupt handle at index + * - On failure, NULL. + */ +__rte_experimental +struct rte_intr_handle *rte_intr_handle_instance_index_get( + struct rte_intr_handle *intr_handle, int index); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to free the memory allocated for event fds. event lists + * and interrupt handle array. + * + * @param intr_handle + * Base address of interrupt handle array. + * + */ +__rte_experimental +void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to populate interrupt handle at a given index of array + * of interrupt handles, with the values defined in src handler. + * + * @param intr_handle + * Start address of interrupt handles + * @param + * Source interrupt handle to be cloned. + * @param index + * Index of the interrupt handle + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle, + const struct rte_intr_handle *src, + int index); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the fd field of interrupt handle with user provided + * file descriptor. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param fd + * file descriptor value provided by user. + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the fd field of the given interrupt handle instance. 
+ * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, fd field. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the type field of interrupt handle with user provided + * interrupt type. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param type + * interrupt type + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle, + enum rte_intr_handle_type type); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the type field of the given interrupt handle instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, interrupt type + * - On failure, RTE_INTR_HANDLE_UNKNOWN. + */ +__rte_experimental +enum rte_intr_handle_type rte_intr_handle_type_get( + const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the device fd field of interrupt handle with user + * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param fd + * interrupt type + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the device fd field of the given interrupt handle instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, dev fd. + * - On failure, a negative value. 
+ */ +__rte_experimental +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the max intr field of interrupt handle with user + * provided max intr value. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param max_intr + * interrupt type + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle, + int max_intr); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the max intr field of the given interrupt handle instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, max intr. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the no of event fd field of interrupt handle with + * user provided available event file descriptor value. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param nb_efd + * Available event fd + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the no of available event fd field of the given interrupt handle + * instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, nb_efd + * - On failure, a negative value. 
+ */ +__rte_experimental +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the no of interrupt vector field of the given interrupt handle + * instance. This field is to configured on device probe time, and based on + * this value efds and elist arrays are dynamically allocated. By default + * this value is set to RTE_MAX_RXTX_INTR_VEC_ID. + * For eg. in case of PCI device, its msix size is queried and efds/elist + * arrays are allocated accordingly. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, nb_intr + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the event fd counter size field of interrupt handle + * with user provided efd counter size. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param efd_counter_size + * size of efd counter, used for vdev + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle, + uint8_t efd_counter_size); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the event fd counter size field of the given interrupt handle + * instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, efd_counter_size + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_efd_counter_size_get( + const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the base address of the event fds array field of given interrupt + * handle. 
+ * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, efds base address + * - On failure, a negative value. + */ +__rte_experimental +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the event fd array index with the given fd. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param index + * efds array index to be set + * @param fd + * event fd + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle, + int index, int fd); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the fd value of event fds array at a given index. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param index + * efds array index to be returned + * + * @return + * - On success, fd + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle, + int index); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * This API is used to set the event list array index with the given elist + * instance. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param index + * elist array index to be set + * @param elist + * event list instance of struct rte_epoll_event + * + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle, + int index, struct rte_epoll_event elist); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the address of elist instance of event list array at a given index. + * + * @param intr_handle + * pointer to the interrupt handle. 
+ * @param index + * elist array index to be returned + * + * @return + * - On success, elist + * - On failure, a negative value. + */ +__rte_experimental +struct rte_epoll_event *rte_intr_handle_elist_index_get( + struct rte_intr_handle *intr_handle, int index); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Allocates the memory of interrupt vector list array, with size defining the + * no of elements required in the array. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param name + * Name assigned to the allocation, or NULL. + * @param size + * No of element required in the array. + * + * @return + * - On success, zero + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle, + const char *name, int size); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Sets the vector value at given index of interrupt vector list field of given + * interrupt handle. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param index + * intr_vec array index to be set + * @param vec + * Interrupt vector value. + * + * @return + * - On success, zero + * - On failure, a negative value. + */ +__rte_experimental +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle, + int index, int vec); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the vector value at the given index of interrupt vector list array. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param index + * intr_vec array index to be returned + * + * @return + * - On success, interrupt vector + * - On failure, a negative value. 
+ */ +__rte_experimental +int rte_intr_handle_vec_list_index_get( + const struct rte_intr_handle *intr_handle, int index); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Freeing the memory allocated for interrupt vector list array. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, zero + * - On failure, a negative value. + */ +__rte_experimental +void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Returns the base address of interrupt vector list array. + * + * @param intr_handle + * pointer to the interrupt handle. + * + * @return + * - On success, base address of intr_vec array + * - On failure, a negative value. + */ +__rte_experimental +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Reallocates the size efds and elist array based on size provided by user. + * By default efds and elist array are allocated with default size + * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device + * probe, device may have capability of more interrupts than + * RTE_MAX_RXTX_INTR_VEC_ID. Hence using this API, PMDs can reallocate the + * arrays as per the max interrupts capability of device. + * + * @param intr_handle + * pointer to the interrupt handle. + * @param size + * efds and elist array size. + * + * @return + * - On success, zero + * - On failure, a negative value. 
+ */ +__rte_experimental +int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle, + int size); #ifdef __cplusplus } #endif

From patchwork Fri Sep 3 12:40:57 2021
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 97932
X-Patchwork-Delegate: david.marchand@redhat.com
DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Fri, 3 Sep 2021 05:41:55 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Fri, 3 Sep 2021 05:41:55 -0700 Received: from localhost.localdomain (unknown [10.29.52.211]) by maili.marvell.com (Postfix) with ESMTP id 75C635B6945; Fri, 3 Sep 2021 05:41:54 -0700 (PDT) From: Harman Kalra To: , Harman Kalra , Ray Kinsella Date: Fri, 3 Sep 2021 18:10:57 +0530 Message-ID: <20210903124102.47425-3-hkalra@marvell.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com> References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: O93CsMUF43P734Gc_9KreSELLEnwzbLz X-Proofpoint-GUID: O93CsMUF43P734Gc_9KreSELLEnwzbLz X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 definitions=2021-09-03_03,2021-09-03_01,2020-04-07_01 Subject: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Implementing get set APIs for interrupt handle fields. To make any change to the interrupt handle fields, one should make use of these APIs. 
Signed-off-by: Harman Kalra Acked-by: Ray Kinsella --- lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++ lib/eal/common/meson.build | 2 + lib/eal/include/rte_eal_interrupts.h | 6 +- lib/eal/version.map | 30 ++ 4 files changed, 543 insertions(+), 1 deletion(-) create mode 100644 lib/eal/common/eal_common_interrupts.c diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c new file mode 100644 index 0000000000..2e4fed96f0 --- /dev/null +++ b/lib/eal/common/eal_common_interrupts.c @@ -0,0 +1,506 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include +#include + +#include +#include +#include + +#include + + +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, + bool from_hugepage) +{ + struct rte_intr_handle *intr_handle; + int i; + + if (from_hugepage) + intr_handle = rte_zmalloc(NULL, + size * sizeof(struct rte_intr_handle), + 0); + else + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle)); + if (!intr_handle) { + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + rte_errno = ENOMEM; + return NULL; + } + + for (i = 0; i < size; i++) { + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID; + intr_handle[i].alloc_from_hugepage = from_hugepage; + } + + return intr_handle; +} + +struct rte_intr_handle *rte_intr_handle_instance_index_get( + struct rte_intr_handle *intr_handle, int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOMEM; + return NULL; + } + + return &intr_handle[index]; +} + +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle, + const struct rte_intr_handle *src, + int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (src == NULL) { + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n"); + rte_errno = EINVAL; + goto fail; + } + + if (index < 0) { + 
RTE_LOG(ERR, EAL, "Index cannot be negative\n");
+		rte_errno = EINVAL;
+		goto fail;
+	}
+
+	intr_handle[index].fd = src->fd;
+	intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+	intr_handle[index].type = src->type;
+	intr_handle[index].max_intr = src->max_intr;
+	intr_handle[index].nb_efd = src->nb_efd;
+	intr_handle[index].efd_counter_size = src->efd_counter_size;
+
+	memcpy(intr_handle[index].efds, src->efds,
+		src->nb_intr * sizeof(src->efds[0]));
+	memcpy(intr_handle[index].elist, src->elist,
+		src->nb_intr * sizeof(src->elist[0]));
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return;
+	}
+
+	if (intr_handle->alloc_from_hugepage)
+		rte_free(intr_handle);
+	else
+		free(intr_handle);
+}
+
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->fd = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->fd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+	enum rte_intr_handle_type type)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->type = type;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_handle_type_get(
+	const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return RTE_INTR_HANDLE_UNKNOWN;
+	}
+
+	return intr_handle->type;
+}
+
+int rte_intr_handle_dev_fd_set(struct
rte_intr_handle *intr_handle, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->vfio_dev_fd = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->vfio_dev_fd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+	int max_intr)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (max_intr > intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Max_intr=%d greater than RTE_MAX_RXTX_INTR_VEC_ID=%d\n",
+			max_intr, intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->max_intr = max_intr;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->max_intr;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
+	int nb_efd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->nb_efd = nb_efd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->nb_efd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance
unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->nb_intr;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+	uint8_t efd_counter_size)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->efd_counter_size = efd_counter_size;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_get(
+	const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->efd_counter_size;
+fail:
+	return rte_errno;
+}
+
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->efds;
+fail:
+	return NULL;
+}
+
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+	int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = EINVAL;
+		goto fail;
+	}
+
+	return intr_handle->efds[index];
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+	int index, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->efds[index] = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+	struct
rte_intr_handle *intr_handle, int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	return &intr_handle->elist[index];
+fail:
+	return NULL;
+}
+
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+	int index, struct rte_epoll_event elist)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->elist[index] = elist;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+
+	return intr_handle->intr_vec;
+}
+
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+	const char *name, int size)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	/* Vector list already allocated */
+	if (intr_handle->intr_vec)
+		return 0;
+
+	if (size > intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+		rte_errno = ENOMEM;
+		goto fail;
+	}
+
+	intr_handle->vec_list_size = size;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_get(
+	const struct rte_intr_handle *intr_handle,
int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->vec_list_size) {
+		RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+			index, intr_handle->vec_list_size);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	return intr_handle->intr_vec[index];
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+	int index, int vec)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->vec_list_size) {
+		RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+			index, intr_handle->vec_list_size);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->intr_vec[index] = vec;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return;
+	}
+
+	rte_free(intr_handle->intr_vec);
+	intr_handle->intr_vec = NULL;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index edfca77779..47f2977539 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -17,6 +17,7 @@ if is_windows
         'eal_common_errno.c',
         'eal_common_fbarray.c',
         'eal_common_hexdump.c',
+        'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
         'eal_common_log.c',
@@ -53,6 +54,7 @@ sources += files(
         'eal_common_fbarray.c',
         'eal_common_hexdump.c',
         'eal_common_hypervisor.c',
+        'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
         'eal_common_log.c',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h index 68ca3a042d..216aece61b 100644 --- a/lib/eal/include/rte_eal_interrupts.h +++ b/lib/eal/include/rte_eal_interrupts.h @@ -55,13 +55,17 @@ struct rte_intr_handle { }; void *handle; /**< device driver handle (Windows) */ }; + bool alloc_from_hugepage; enum rte_intr_handle_type type; /**< handle type */ uint32_t max_intr; /**< max interrupt requested */ uint32_t nb_efd; /**< number of available efd(event fd) */ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */ + uint16_t nb_intr; + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */ int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */ struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID]; - /**< intr vector epoll event */ + /**< intr vector epoll event */ + uint16_t vec_list_size; int *intr_vec; /**< intr vector number array */ }; diff --git a/lib/eal/version.map b/lib/eal/version.map index beeb986adc..56108d0998 100644 --- a/lib/eal/version.map +++ b/lib/eal/version.map @@ -426,6 +426,36 @@ EXPERIMENTAL { # added in 21.08 rte_power_monitor_multi; # WINDOWS_NO_EXPORT + + # added in 21.11 + rte_intr_handle_fd_set; + rte_intr_handle_fd_get; + rte_intr_handle_dev_fd_set; + rte_intr_handle_dev_fd_get; + rte_intr_handle_type_set; + rte_intr_handle_type_get; + rte_intr_handle_instance_alloc; + rte_intr_handle_instance_index_get; + rte_intr_handle_instance_free; + rte_intr_handle_instance_index_set; + rte_intr_handle_event_list_update; + rte_intr_handle_max_intr_set; + rte_intr_handle_max_intr_get; + rte_intr_handle_nb_efd_set; + rte_intr_handle_nb_efd_get; + rte_intr_handle_nb_intr_get; + rte_intr_handle_efds_index_set; + rte_intr_handle_efds_index_get; + rte_intr_handle_efds_base; + rte_intr_handle_elist_index_set; + rte_intr_handle_elist_index_get; + rte_intr_handle_efd_counter_size_set; + rte_intr_handle_efd_counter_size_get; + rte_intr_handle_vec_list_alloc; + 
rte_intr_handle_vec_list_index_set;
+	rte_intr_handle_vec_list_index_get;
+	rte_intr_handle_vec_list_free;
+	rte_intr_handle_vec_list_base;
};

INTERNAL {

From patchwork Fri Sep 3 12:40:58 2021
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 97933
X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra
To: , Harman Kalra , Bruce Richardson
Date: Fri, 3 Sep 2021 18:10:58 +0530
Message-ID: <20210903124102.47425-4-hkalra@marvell.com>
In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com>
References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com>
Subject: [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle

Change the interrupt framework to use the interrupt handle APIs to get/set any field. Direct access to the fields should be avoided to prevent ABI breakage in the future.
Signed-off-by: Harman Kalra --- lib/eal/freebsd/eal_interrupts.c | 94 ++++++---- lib/eal/linux/eal_interrupts.c | 294 +++++++++++++++++++------------ 2 files changed, 242 insertions(+), 146 deletions(-) diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 86810845fe..171006f19f 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -40,7 +40,7 @@ struct rte_intr_callback { struct rte_intr_source { TAILQ_ENTRY(rte_intr_source) next; - struct rte_intr_handle intr_handle; /**< interrupt handle */ + struct rte_intr_handle *intr_handle; /**< interrupt handle */ struct rte_intr_cb_list callbacks; /**< user callbacks */ uint32_t active; }; @@ -60,7 +60,7 @@ static int intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke) { /* alarm callbacks are special case */ - if (ih->type == RTE_INTR_HANDLE_ALARM) { + if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) { uint64_t timeout_ns; /* get soonest alarm timeout */ @@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke) } else { ke->filter = EVFILT_READ; } - ke->ident = ih->fd; + ke->ident = rte_intr_handle_fd_get(ih); return 0; } @@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, int ret = 0, add_event = 0; /* first do parameter checking */ - if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 || + cb == NULL) { RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n"); return -EINVAL; @@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* find the source for this intr_handle */ TAILQ_FOREACH(src, &intr_sources, next) { - if (src->intr_handle.fd == intr_handle->fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) break; } @@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle 
*intr_handle, * thing on the list should be eal_alarm_callback() and we may * be called just to reset the timer. */ - if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM && - !TAILQ_EMPTY(&src->callbacks)) { + if (src != NULL && rte_intr_handle_type_get(src->intr_handle) == + RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) { callback = NULL; } else { /* allocate a new interrupt callback entity */ @@ -135,9 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, ret = -ENOMEM; goto fail; } else { - src->intr_handle = *intr_handle; - TAILQ_INIT(&src->callbacks); - TAILQ_INSERT_TAIL(&intr_sources, src, next); + src->intr_handle = + rte_intr_handle_instance_alloc( + RTE_INTR_HANDLE_DEFAULT_SIZE, false); + if (src->intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + free(callback); + ret = -ENOMEM; + } else { + rte_intr_handle_instance_index_set( + src->intr_handle, intr_handle, 0); + TAILQ_INIT(&src->callbacks); + TAILQ_INSERT_TAIL(&intr_sources, src, + next); + } } } @@ -151,7 +164,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* add events to the queue. timer events are special as we need to * re-set the timer. 
*/ - if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) { + if (add_event || rte_intr_handle_type_get(src->intr_handle) == + RTE_INTR_HANDLE_ALARM) { struct kevent ke; memset(&ke, 0, sizeof(ke)); @@ -173,12 +187,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, */ if (errno == ENODEV) RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n", - src->intr_handle.fd); + rte_intr_handle_fd_get(src->intr_handle)); else RTE_LOG(ERR, EAL, "Error adding fd %d " - "kevent, %s\n", - src->intr_handle.fd, - strerror(errno)); + "kevent, %s\n", + rte_intr_handle_fd_get( + src->intr_handle), + strerror(errno)); ret = -errno; goto fail; } @@ -213,7 +228,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, struct rte_intr_callback *cb, *next; /* do parameter checking first */ - if (intr_handle == NULL || intr_handle->fd < 0) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) { RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); return -EINVAL; @@ -228,7 +243,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* check if the insterrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) - if (src->intr_handle.fd == intr_handle->fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) break; /* No interrupt source registered for the fd */ @@ -268,7 +284,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, struct rte_intr_callback *cb, *next; /* do parameter checking first */ - if (intr_handle == NULL || intr_handle->fd < 0) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) { RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); return -EINVAL; @@ -282,7 +298,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* check if the insterrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, 
next) - if (src->intr_handle.fd == intr_handle->fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) break; /* No interrupt source registered for the fd */ @@ -314,7 +331,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, */ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", - src->intr_handle.fd, strerror(errno)); + rte_intr_handle_fd_get(src->intr_handle), + strerror(errno)); /* removing non-existent even is an expected condition * in some circumstances (e.g. oneshot events). */ @@ -365,17 +383,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) if (intr_handle == NULL) return -1; - if (intr_handle->type == RTE_INTR_HANDLE_VDEV) { + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) { rc = 0; goto out; } - if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) { + if (rte_intr_handle_fd_get(intr_handle) < 0 || + rte_intr_handle_dev_fd_get(intr_handle) < 0) { rc = -1; goto out; } - switch (intr_handle->type) { + switch (rte_intr_handle_type_get(intr_handle)) { /* not used at this moment */ case RTE_INTR_HANDLE_ALARM: rc = -1; @@ -388,7 +407,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) default: RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); rc = -1; break; } @@ -406,17 +425,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) if (intr_handle == NULL) return -1; - if (intr_handle->type == RTE_INTR_HANDLE_VDEV) { + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) { rc = 0; goto out; } - if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) { + if (rte_intr_handle_fd_get(intr_handle) < 0 || + rte_intr_handle_dev_fd_get(intr_handle) < 0) { rc = -1; goto out; } - switch (intr_handle->type) { + switch (rte_intr_handle_type_get(intr_handle)) { /* not used at this moment */ case RTE_INTR_HANDLE_ALARM: rc = -1; @@ 
-429,7 +449,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) default: RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); rc = -1; break; } @@ -441,7 +461,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) int rte_intr_ack(const struct rte_intr_handle *intr_handle) { - if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV) + if (intr_handle && + rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) return 0; return -1; @@ -463,7 +484,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) rte_spinlock_lock(&intr_lock); TAILQ_FOREACH(src, &intr_sources, next) - if (src->intr_handle.fd == event_fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + event_fd) break; if (src == NULL) { rte_spinlock_unlock(&intr_lock); @@ -475,7 +497,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) rte_spinlock_unlock(&intr_lock); /* set the length to be read dor different handle type */ - switch (src->intr_handle.type) { + switch (rte_intr_handle_type_get(src->intr_handle)) { case RTE_INTR_HANDLE_ALARM: bytes_read = 0; call = true; @@ -546,7 +568,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) /* mark for deletion from the queue */ ke.flags = EV_DELETE; - if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) { + if (intr_source_to_kevent(src->intr_handle, + &ke) < 0) { RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); rte_spinlock_unlock(&intr_lock); return; @@ -557,7 +580,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) */ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { RTE_LOG(ERR, EAL, "Error removing fd %d kevent, " - "%s\n", src->intr_handle.fd, + "%s\n", + rte_intr_handle_fd_get( + src->intr_handle), strerror(errno)); /* removing non-existent even is an expected * condition in some circumstances @@ -567,7 +592,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) TAILQ_REMOVE(&src->callbacks, cb, next); 
if (cb->ucb_fn) - cb->ucb_fn(&src->intr_handle, cb->cb_arg); + cb->ucb_fn(src->intr_handle, + cb->cb_arg); free(cb); } } diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index 22b3b7bcd9..570eddf088 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -20,6 +20,7 @@ #include #include +#include #include #include #include @@ -82,7 +83,7 @@ struct rte_intr_callback { struct rte_intr_source { TAILQ_ENTRY(rte_intr_source) next; - struct rte_intr_handle intr_handle; /**< interrupt handle */ + struct rte_intr_handle *intr_handle; /**< interrupt handle */ struct rte_intr_cb_list callbacks; /**< user callbacks */ uint32_t active; }; @@ -112,7 +113,7 @@ static int vfio_enable_intx(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set *irq_set; char irq_set_buf[IRQ_SET_BUF_LEN]; - int len, ret; + int len, ret, vfio_dev_fd; int *fd_ptr; len = sizeof(irq_set_buf); @@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_INTX_IRQ_INDEX; irq_set->start = 0; fd_ptr = (int *) &irq_set->data; - *fd_ptr = intr_handle->fd; + *fd_ptr = rte_intr_handle_fd_get(intr_handle); - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } @@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_INTX_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } return 0; @@ -159,7 +161,7 @@ static int 
vfio_disable_intx(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set *irq_set; char irq_set_buf[IRQ_SET_BUF_LEN]; - int len, ret; + int len, ret, vfio_dev_fd; len = sizeof(struct vfio_irq_set); @@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_INTX_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } @@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_INTX_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, - "Error disabling INTx interrupts for fd %d\n", intr_handle->fd); + "Error disabling INTx interrupts for fd %d\n", + rte_intr_handle_fd_get(intr_handle)); return -1; } return 0; @@ -202,6 +206,7 @@ static int vfio_ack_intx(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set irq_set; + int vfio_dev_fd; /* unmask INTx */ memset(&irq_set, 0, sizeof(irq_set)); @@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle) irq_set.index = VFIO_PCI_INTX_IRQ_INDEX; irq_set.start = 0; - if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } return 0; @@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { int len, ret; char irq_set_buf[IRQ_SET_BUF_LEN]; struct vfio_irq_set 
*irq_set; - int *fd_ptr; + int *fd_ptr, vfio_dev_fd; len = sizeof(irq_set_buf); @@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_MSI_IRQ_INDEX; irq_set->start = 0; fd_ptr = (int *) &irq_set->data; - *fd_ptr = intr_handle->fd; + *fd_ptr = rte_intr_handle_fd_get(intr_handle); - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } return 0; @@ -253,7 +260,7 @@ static int vfio_disable_msi(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set *irq_set; char irq_set_buf[IRQ_SET_BUF_LEN]; - int len, ret; + int len, ret, vfio_dev_fd; len = sizeof(struct vfio_irq_set); @@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_MSI_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) RTE_LOG(ERR, EAL, - "Error disabling MSI interrupts for fd %d\n", intr_handle->fd); + "Error disabling MSI interrupts for fd %d\n", + rte_intr_handle_fd_get(intr_handle)); return ret; } @@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) { int len, ret; char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; struct vfio_irq_set *irq_set; - int *fd_ptr; + int *fd_ptr, vfio_dev_fd, i; len = sizeof(irq_set_buf); irq_set = (struct vfio_irq_set *) irq_set_buf; irq_set->argsz = len; /* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */ - irq_set->count = intr_handle->max_intr ? - (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ? 
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1; + irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ? + (rte_intr_handle_max_intr_get(intr_handle) > + RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 : + rte_intr_handle_max_intr_get(intr_handle)) : 1; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER; irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; irq_set->start = 0; fd_ptr = (int *) &irq_set->data; /* INTR vector offset 0 reserve for non-efds mapping */ - fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd; - memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds, - sizeof(*intr_handle->efds) * intr_handle->nb_efd); + fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle); + for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++) + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = + rte_intr_handle_efds_index_get(intr_handle, i); - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } @@ -314,7 +327,7 @@ static int vfio_disable_msix(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set *irq_set; char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; - int len, ret; + int len, ret, vfio_dev_fd; len = sizeof(struct vfio_irq_set); @@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) { irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) RTE_LOG(ERR, EAL, - "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd); + "Error disabling MSI-X interrupts for fd %d\n", + 
rte_intr_handle_fd_get(intr_handle)); return ret; } @@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) int len, ret; char irq_set_buf[IRQ_SET_BUF_LEN]; struct vfio_irq_set *irq_set; - int *fd_ptr; + int *fd_ptr, vfio_dev_fd; len = sizeof(irq_set_buf); @@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) irq_set->index = VFIO_PCI_REQ_IRQ_INDEX; irq_set->start = 0; fd_ptr = (int *) &irq_set->data; - *fd_ptr = intr_handle->fd; + *fd_ptr = rte_intr_handle_fd_get(intr_handle); - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } @@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) { struct vfio_irq_set *irq_set; char irq_set_buf[IRQ_SET_BUF_LEN]; - int len, ret; + int len, ret, vfio_dev_fd; len = sizeof(struct vfio_irq_set); @@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) irq_set->index = VFIO_PCI_REQ_IRQ_INDEX; irq_set->start = 0; - ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return ret; } @@ -399,20 +416,22 @@ static int uio_intx_intr_disable(const struct rte_intr_handle *intr_handle) { unsigned char command_high; + int uio_cfg_fd; /* use UIO config file descriptor for uio_pci_generic */ - if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) { + uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) { RTE_LOG(ERR, EAL, "Error reading interrupts 
status for fd %d\n", - intr_handle->uio_cfg_fd); + uio_cfg_fd); return -1; } /* disable interrupts */ command_high |= 0x4; - if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) { + if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d\n", - intr_handle->uio_cfg_fd); + uio_cfg_fd); return -1; } @@ -423,20 +442,22 @@ static int uio_intx_intr_enable(const struct rte_intr_handle *intr_handle) { unsigned char command_high; + int uio_cfg_fd; /* use UIO config file descriptor for uio_pci_generic */ - if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) { + uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) { RTE_LOG(ERR, EAL, "Error reading interrupts status for fd %d\n", - intr_handle->uio_cfg_fd); + uio_cfg_fd); return -1; } /* enable interrupts */ command_high &= ~0x4; - if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) { + if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d\n", - intr_handle->uio_cfg_fd); + uio_cfg_fd); return -1; } @@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle) { const int value = 0; - if (write(intr_handle->fd, &value, sizeof(value)) < 0) { + if (write(rte_intr_handle_fd_get(intr_handle), &value, + sizeof(value)) < 0) { RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n", - intr_handle->fd, strerror(errno)); + rte_intr_handle_fd_get(intr_handle), strerror(errno)); return -1; } return 0; @@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle) { const int value = 1; - if (write(intr_handle->fd, &value, sizeof(value)) < 0) { + if (write(rte_intr_handle_fd_get(intr_handle), &value, + sizeof(value)) < 0) { RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n", - intr_handle->fd, strerror(errno)); + rte_intr_handle_fd_get(intr_handle), strerror(errno)); return -1; } return 0; @@ 
-482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, wake_thread = 0; /* first do parameter checking */ - if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 || + cb == NULL) { RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n"); return -EINVAL; @@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* check if there is at least one callback registered for the fd */ TAILQ_FOREACH(src, &intr_sources, next) { - if (src->intr_handle.fd == intr_handle->fd) { + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) { /* we had no interrupts for this */ if (TAILQ_EMPTY(&src->callbacks)) wake_thread = 1; @@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, free(callback); ret = -ENOMEM; } else { - src->intr_handle = *intr_handle; - TAILQ_INIT(&src->callbacks); - TAILQ_INSERT_TAIL(&(src->callbacks), callback, next); - TAILQ_INSERT_TAIL(&intr_sources, src, next); - wake_thread = 1; - ret = 0; + src->intr_handle = rte_intr_handle_instance_alloc( + RTE_INTR_HANDLE_DEFAULT_SIZE, false); + if (src->intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + free(callback); + ret = -ENOMEM; + } else { + rte_intr_handle_instance_index_set( + src->intr_handle, intr_handle, 0); + TAILQ_INIT(&src->callbacks); + TAILQ_INSERT_TAIL(&(src->callbacks), callback, + next); + TAILQ_INSERT_TAIL(&intr_sources, src, next); + wake_thread = 1; + ret = 0; + } } } @@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, struct rte_intr_callback *cb, *next; /* do parameter checking first */ - if (intr_handle == NULL || intr_handle->fd < 0) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) { RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); return -EINVAL; @@ -565,7 
+600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) - if (src->intr_handle.fd == intr_handle->fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) break; /* No interrupt source registered for the fd */ @@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, struct rte_intr_callback *cb, *next; /* do parameter checking first */ - if (intr_handle == NULL || intr_handle->fd < 0) { + if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) { RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); return -EINVAL; @@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) - if (src->intr_handle.fd == intr_handle->fd) + if (rte_intr_handle_fd_get(src->intr_handle) == + rte_intr_handle_fd_get(intr_handle)) break; /* No interrupt source registered for the fd */ @@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* all callbacks for that source are removed.
*/ if (TAILQ_EMPTY(&src->callbacks)) { TAILQ_REMOVE(&intr_sources, src, next); + rte_intr_handle_instance_free(src->intr_handle); free(src); } } @@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle, int rte_intr_enable(const struct rte_intr_handle *intr_handle) { - int rc = 0; + int rc = 0, uio_cfg_fd; if (intr_handle == NULL) return -1; - if (intr_handle->type == RTE_INTR_HANDLE_VDEV) { + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) { rc = 0; goto out; } - if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) { + uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) { rc = -1; goto out; } - switch (intr_handle->type){ + switch (rte_intr_handle_type_get(intr_handle)) { /* write to the uio fd to enable the interrupt */ case RTE_INTR_HANDLE_UIO: if (uio_intr_enable(intr_handle)) @@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) default: RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); rc = -1; break; } @@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) int rte_intr_ack(const struct rte_intr_handle *intr_handle) { - if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV) + int uio_cfg_fd; + + if (intr_handle && rte_intr_handle_type_get(intr_handle) == + RTE_INTR_HANDLE_VDEV) return 0; - if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) + uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 || + uio_cfg_fd < 0) return -1; - switch (intr_handle->type) { + switch (rte_intr_handle_type_get(intr_handle)) { /* Both acking and enabling are same for UIO */ case RTE_INTR_HANDLE_UIO: if (uio_intr_enable(intr_handle)) @@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) /* unknown handle type */ default: 
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); return -1; } @@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) int rte_intr_disable(const struct rte_intr_handle *intr_handle) { - int rc = 0; + int rc = 0, uio_cfg_fd; if (intr_handle == NULL) return -1; - if (intr_handle->type == RTE_INTR_HANDLE_VDEV) { + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) { rc = 0; goto out; } - if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) { + uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle); + if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) { rc = -1; goto out; } - switch (intr_handle->type){ + switch (rte_intr_handle_type_get(intr_handle)) { /* write to the uio fd to disable the interrupt */ case RTE_INTR_HANDLE_UIO: if (uio_intr_disable(intr_handle)) @@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) default: RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", - intr_handle->fd); + rte_intr_handle_fd_get(intr_handle)); rc = -1; break; } @@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) } rte_spinlock_lock(&intr_lock); TAILQ_FOREACH(src, &intr_sources, next) - if (src->intr_handle.fd == + if (rte_intr_handle_fd_get(src->intr_handle) == events[n].data.fd) break; if (src == NULL){ @@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) rte_spinlock_unlock(&intr_lock); /* set the length to be read for different handle type */ - switch (src->intr_handle.type) { + switch (rte_intr_handle_type_get(src->intr_handle)) { case RTE_INTR_HANDLE_UIO: case RTE_INTR_HANDLE_UIO_INTX: bytes_read = sizeof(buf.uio_intr_count); @@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) TAILQ_REMOVE(&src->callbacks, cb, next); free(cb); } + rte_intr_handle_instance_free(src->intr_handle); free(src); return -1; } else if (bytes_read == 0) @@
-1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) if (cb->pending_delete) { TAILQ_REMOVE(&src->callbacks, cb, next); if (cb->ucb_fn) - cb->ucb_fn(&src->intr_handle, cb->cb_arg); + cb->ucb_fn(src->intr_handle, + cb->cb_arg); free(cb); rv++; } @@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) /* all callbacks for that source are removed. */ if (TAILQ_EMPTY(&src->callbacks)) { TAILQ_REMOVE(&intr_sources, src, next); + rte_intr_handle_instance_free(src->intr_handle); free(src); } @@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg) continue; /* skip those with no callbacks */ memset(&ev, 0, sizeof(ev)); ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP; - ev.data.fd = src->intr_handle.fd; + ev.data.fd = rte_intr_handle_fd_get(src->intr_handle); /** * add all the uio device file descriptor * into wait list. */ if (epoll_ctl(pfd, EPOLL_CTL_ADD, - src->intr_handle.fd, &ev) < 0){ + rte_intr_handle_fd_get(src->intr_handle), + &ev) < 0) { rte_panic("Error adding fd %d epoll_ctl, %s\n", - src->intr_handle.fd, strerror(errno)); + rte_intr_handle_fd_get(src->intr_handle), + strerror(errno)); } else numfds++; @@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) int bytes_read = 0; int nbytes; - switch (intr_handle->type) { + switch (rte_intr_handle_type_get(intr_handle)) { case RTE_INTR_HANDLE_UIO: case RTE_INTR_HANDLE_UIO_INTX: bytes_read = sizeof(buf.uio_intr_count); @@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) break; #endif case RTE_INTR_HANDLE_VDEV: - bytes_read = intr_handle->efd_counter_size; + bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle); /* For vdev, number of bytes to read is set by driver */ break; case RTE_INTR_HANDLE_EXT: @@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ? 
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec; - if (!intr_handle || intr_handle->nb_efd == 0 || - efd_idx >= intr_handle->nb_efd) { + if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 || + efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) { RTE_LOG(ERR, EAL, "Wrong intr vector number.\n"); return -EPERM; } @@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, switch (op) { case RTE_INTR_EVENT_ADD: epfd_op = EPOLL_CTL_ADD; - rev = &intr_handle->elist[efd_idx]; + rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx); if (__atomic_load_n(&rev->status, __ATOMIC_RELAXED) != RTE_EPOLL_INVALID) { RTE_LOG(INFO, EAL, "Event already been added.\n"); @@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr; epdata->cb_arg = (void *)intr_handle; rc = rte_epoll_ctl(epfd, epfd_op, - intr_handle->efds[efd_idx], rev); + rte_intr_handle_efds_index_get(intr_handle, + efd_idx), + rev); if (!rc) RTE_LOG(DEBUG, EAL, "efd %d associated with vec %d added on epfd %d" @@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, break; case RTE_INTR_EVENT_DEL: epfd_op = EPOLL_CTL_DEL; - rev = &intr_handle->elist[efd_idx]; + rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx); if (__atomic_load_n(&rev->status, __ATOMIC_RELAXED) == RTE_EPOLL_INVALID) { RTE_LOG(INFO, EAL, "Event does not exist.\n"); @@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle) uint32_t i; struct rte_epoll_event *rev; - for (i = 0; i < intr_handle->nb_efd; i++) { - rev = &intr_handle->elist[i]; + for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); + i++) { + rev = rte_intr_handle_elist_index_get(intr_handle, i); if (__atomic_load_n(&rev->status, __ATOMIC_RELAXED) == RTE_EPOLL_INVALID) continue; @@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) 
assert(nb_efd != 0); - if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) { + if (rte_intr_handle_type_get(intr_handle) == + RTE_INTR_HANDLE_VFIO_MSIX) { for (i = 0; i < n; i++) { fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (fd < 0) { @@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) errno, strerror(errno)); return -errno; } - intr_handle->efds[i] = fd; + + if (rte_intr_handle_efds_index_set(intr_handle, i, fd)) + return -rte_errno; } - intr_handle->nb_efd = n; - intr_handle->max_intr = NB_OTHER_INTR + n; - } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) { + + if (rte_intr_handle_nb_efd_set(intr_handle, n)) + return -rte_errno; + + if (rte_intr_handle_max_intr_set(intr_handle, + NB_OTHER_INTR + n)) + return -rte_errno; + } else if (rte_intr_handle_type_get(intr_handle) == + RTE_INTR_HANDLE_VDEV) { /* only check, initialization would be done in vdev driver.*/ - if (intr_handle->efd_counter_size > - sizeof(union rte_intr_read_buffer)) { + if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle) + > sizeof(union rte_intr_read_buffer)) { RTE_LOG(ERR, EAL, "the efd_counter_size is oversized"); return -EINVAL; } } else { - intr_handle->efds[0] = intr_handle->fd; - intr_handle->nb_efd = RTE_MIN(nb_efd, 1U); - intr_handle->max_intr = NB_OTHER_INTR; + if (rte_intr_handle_efds_index_set(intr_handle, 0, + rte_intr_handle_fd_get(intr_handle))) + return -rte_errno; + if (rte_intr_handle_nb_efd_set(intr_handle, + RTE_MIN(nb_efd, 1U))) + return -rte_errno; + if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR)) + return -rte_errno; } return 0; @@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle) uint32_t i; rte_intr_free_epoll_fd(intr_handle); - if (intr_handle->max_intr > intr_handle->nb_efd) { - for (i = 0; i < intr_handle->nb_efd; i++) - close(intr_handle->efds[i]); + if (rte_intr_handle_max_intr_get(intr_handle) > + rte_intr_handle_nb_efd_get(intr_handle)) { + for (i = 0; i 
< + (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++) + close(rte_intr_handle_efds_index_get(intr_handle, i)); } - intr_handle->nb_efd = 0; - intr_handle->max_intr = 0; + rte_intr_handle_nb_efd_set(intr_handle, 0); + rte_intr_handle_max_intr_set(intr_handle, 0); } int rte_intr_dp_is_en(struct rte_intr_handle *intr_handle) { - return !(!intr_handle->nb_efd); + return !(!rte_intr_handle_nb_efd_get(intr_handle)); } int @@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle) if (!rte_intr_dp_is_en(intr_handle)) return 1; else - return !!(intr_handle->max_intr - intr_handle->nb_efd); + return !!(rte_intr_handle_max_intr_get(intr_handle) - + rte_intr_handle_nb_efd_get(intr_handle)); } int rte_intr_cap_multiple(struct rte_intr_handle *intr_handle) { - if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) return 1; - if (intr_handle->type == RTE_INTR_HANDLE_VDEV) + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) return 1; return 0; From patchwork Fri Sep 3 12:40:59 2021 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 97934 X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra To: , Harman Kalra Date: Fri, 3 Sep 2021 18:10:59 +0530 Message-ID: <20210903124102.47425-5-hkalra@marvell.com> In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com> References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com>
Subject: [dpdk-dev] [PATCH v1 4/7] test/interrupt: apply get set interrupt handle APIs Update the interrupt testsuite to use the interrupt handle get/set APIs. Signed-off-by: Harman Kalra --- app/test/test_interrupts.c | 237 ++++++++++++++++++++++++------------- 1 file changed, 152 insertions(+), 85 deletions(-) diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c index 233b14a70b..289bca66dd 100644 --- a/app/test/test_interrupts.c +++ b/app/test/test_interrupts.c @@ -27,7 +27,7 @@ enum test_interrupt_handle_type { /* flag of if callback is called */ static volatile int flag; -static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX]; +static struct rte_intr_handle *intr_handles; static enum test_interrupt_handle_type test_intr_type = TEST_INTERRUPT_HANDLE_MAX; @@ -50,7 +50,7 @@ static union intr_pipefds pfds; static inline int test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle) { - if (!intr_handle || intr_handle->fd < 0) + if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0) return -1; return 0; @@ -62,31 +62,70 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle) static int test_interrupt_init(void) { + struct rte_intr_handle *test_intr_handle; + if (pipe(pfds.pipefd) < 0) return -1; - intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1; - intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type = - RTE_INTR_HANDLE_UNKNOWN; + intr_handles = rte_intr_handle_instance_alloc(TEST_INTERRUPT_HANDLE_MAX, + false); + if (!intr_handles) + return -1; - intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd; - intr_handles[TEST_INTERRUPT_HANDLE_VALID].type = - RTE_INTR_HANDLE_UNKNOWN; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, +
TEST_INTERRUPT_HANDLE_INVALID); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, -1)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN)) + return -1; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type = - RTE_INTR_HANDLE_UIO; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN)) + return -1; + + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_UIO); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO)) + return -1; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type = - RTE_INTR_HANDLE_ALARM; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_ALARM); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM)) + return -1; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd; - intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type = - RTE_INTR_HANDLE_DEV_EVENT; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, + RTE_INTR_HANDLE_DEV_EVENT)) + return -1; - intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd; - 
intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_CASE1); + if (!test_intr_handle) + return -1; + if (rte_intr_handle_fd_set(test_intr_handle, pfds.writefd)) + return -1; + if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO)) + return -1; return 0; } @@ -97,6 +136,7 @@ test_interrupt_init(void) static int test_interrupt_deinit(void) { + rte_intr_handle_instance_free(intr_handles); close(pfds.pipefd[0]); close(pfds.pipefd[1]); @@ -125,8 +165,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l, if (!intr_handle_l || !intr_handle_r) return -1; - if (intr_handle_l->fd != intr_handle_r->fd || - intr_handle_l->type != intr_handle_r->type) + if (rte_intr_handle_fd_get(intr_handle_l) != + rte_intr_handle_fd_get(intr_handle_r) || + rte_intr_handle_type_get(intr_handle_l) != + rte_intr_handle_type_get(intr_handle_r)) return -1; return 0; @@ -178,6 +220,8 @@ static void test_interrupt_callback(void *arg) { struct rte_intr_handle *intr_handle = arg; + struct rte_intr_handle *test_intr_handle; + if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) { printf("invalid interrupt type\n"); flag = -1; @@ -198,8 +242,9 @@ test_interrupt_callback(void *arg) return; } - if (test_interrupt_handle_compare(intr_handle, - &(intr_handles[test_intr_type])) == 0) + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + test_intr_type); + if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0) flag = 1; } @@ -223,7 +268,7 @@ test_interrupt_callback_1(void *arg) static int test_interrupt_enable(void) { - struct rte_intr_handle test_intr_handle; + struct rte_intr_handle *test_intr_handle; /* check with null intr_handle */ if (rte_intr_enable(NULL) == 0) { @@ -232,46 +277,52 @@ test_interrupt_enable(void) } /* check with invalid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID]; - if 
(rte_intr_enable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_INVALID); + if (rte_intr_enable(test_intr_handle) == 0) { printf("unexpectedly enable invalid intr_handle " "successfully\n"); return -1; } /* check with valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID]; - if (rte_intr_enable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + if (rte_intr_enable(test_intr_handle) == 0) { printf("unexpectedly enable a specific intr_handle " "successfully\n"); return -1; } /* check with specific valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM]; - if (rte_intr_enable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_ALARM); + if (rte_intr_enable(test_intr_handle) == 0) { printf("unexpectedly enable a specific intr_handle " "successfully\n"); return -1; } /* check with specific valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT]; - if (rte_intr_enable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT); + if (rte_intr_enable(test_intr_handle) == 0) { printf("unexpectedly enable a specific intr_handle " "successfully\n"); return -1; } /* check with valid handler and its type */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1]; - if (rte_intr_enable(&test_intr_handle) < 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_CASE1); + if (rte_intr_enable(test_intr_handle) < 0) { printf("fail to enable interrupt on a simulated handler\n"); return -1; } - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO]; - if (rte_intr_enable(&test_intr_handle) == 0) { + test_intr_handle = 
rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_UIO); + if (rte_intr_enable(test_intr_handle) == 0) { printf("unexpectedly enable a specific intr_handle " "successfully\n"); return -1; @@ -286,7 +337,7 @@ test_interrupt_enable(void) static int test_interrupt_disable(void) { - struct rte_intr_handle test_intr_handle; + struct rte_intr_handle *test_intr_handle; /* check with null intr_handle */ if (rte_intr_disable(NULL) == 0) { @@ -296,46 +347,52 @@ test_interrupt_disable(void) } /* check with invalid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID]; - if (rte_intr_disable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_INVALID); + if (rte_intr_disable(test_intr_handle) == 0) { printf("unexpectedly disable invalid intr_handle " "successfully\n"); return -1; } /* check with valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID]; - if (rte_intr_disable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + if (rte_intr_disable(test_intr_handle) == 0) { printf("unexpectedly disable a specific intr_handle " "successfully\n"); return -1; } /* check with specific valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM]; - if (rte_intr_disable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_ALARM); + if (rte_intr_disable(test_intr_handle) == 0) { printf("unexpectedly disable a specific intr_handle " "successfully\n"); return -1; } /* check with specific valid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT]; - if (rte_intr_disable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT); + if 
(rte_intr_disable(test_intr_handle) == 0) { printf("unexpectedly disable a specific intr_handle " "successfully\n"); return -1; } /* check with valid handler and its type */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1]; - if (rte_intr_disable(&test_intr_handle) < 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_CASE1); + if (rte_intr_disable(test_intr_handle) < 0) { printf("fail to disable interrupt on a simulated handler\n"); return -1; } - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO]; - if (rte_intr_disable(&test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_UIO); + if (rte_intr_disable(test_intr_handle) == 0) { printf("unexpectedly disable a specific intr_handle " "successfully\n"); return -1; @@ -351,13 +408,14 @@ static int test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type) { int count; - struct rte_intr_handle test_intr_handle; + struct rte_intr_handle *test_intr_handle; flag = 0; - test_intr_handle = intr_handles[intr_type]; + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + intr_type); test_intr_type = intr_type; - if (rte_intr_callback_register(&test_intr_handle, - test_interrupt_callback, &test_intr_handle) < 0) { + if (rte_intr_callback_register(test_intr_handle, + test_interrupt_callback, test_intr_handle) < 0) { printf("fail to register callback\n"); return -1; } @@ -371,9 +429,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type) rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL); while ((count = - rte_intr_callback_unregister(&test_intr_handle, + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, - &test_intr_handle)) < 0) { + test_intr_handle)) < 0) { if (count != -EAGAIN) return -1; } @@ -396,7 +454,7 @@ static int test_interrupt(void) { int ret = -1; - struct rte_intr_handle test_intr_handle; + struct 
rte_intr_handle *test_intr_handle; if (test_interrupt_init() < 0) { printf("fail to initialize for testing interrupt\n"); @@ -444,17 +502,20 @@ test_interrupt(void) } /* check if it will fail to register cb with invalid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID]; - if (rte_intr_callback_register(&test_intr_handle, - test_interrupt_callback, &test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_INVALID); + if (rte_intr_callback_register(test_intr_handle, + test_interrupt_callback, test_intr_handle) == 0) { printf("unexpectedly register successfully with invalid " "intr_handle\n"); goto out; } /* check if it will fail to register without callback */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID]; - if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + if (rte_intr_callback_register(test_intr_handle, NULL, + test_intr_handle) == 0) { printf("unexpectedly register successfully with " "null callback\n"); goto out; @@ -469,39 +530,41 @@ test_interrupt(void) } /* check if it will fail to unregister cb with invalid intr_handle */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID]; - if (rte_intr_callback_unregister(&test_intr_handle, - test_interrupt_callback, &test_intr_handle) > 0) { + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_INVALID); + if (rte_intr_callback_unregister(test_intr_handle, + test_interrupt_callback, test_intr_handle) > 0) { printf("unexpectedly unregister successfully with " "invalid intr_handle\n"); goto out; } /* check if it is ok to register the same intr_handle twice */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID]; - if (rte_intr_callback_register(&test_intr_handle, - test_interrupt_callback, &test_intr_handle) < 0) { + 
test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + if (rte_intr_callback_register(test_intr_handle, + test_interrupt_callback, test_intr_handle) < 0) { printf("it fails to register test_interrupt_callback\n"); goto out; } - if (rte_intr_callback_register(&test_intr_handle, - test_interrupt_callback_1, &test_intr_handle) < 0) { + if (rte_intr_callback_register(test_intr_handle, + test_interrupt_callback_1, test_intr_handle) < 0) { printf("it fails to register test_interrupt_callback_1\n"); goto out; } /* check if it will fail to unregister with invalid parameter */ - if (rte_intr_callback_unregister(&test_intr_handle, + if (rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, (void *)0xff) != 0) { printf("unexpectedly unregisters successfully with " "invalid arg\n"); goto out; } - if (rte_intr_callback_unregister(&test_intr_handle, - test_interrupt_callback, &test_intr_handle) <= 0) { + if (rte_intr_callback_unregister(test_intr_handle, + test_interrupt_callback, test_intr_handle) <= 0) { printf("it fails to unregister test_interrupt_callback\n"); goto out; } - if (rte_intr_callback_unregister(&test_intr_handle, + if (rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback_1, (void *)-1) <= 0) { printf("it fails to unregister test_interrupt_callback_1 " "for all\n"); @@ -528,28 +591,32 @@ test_interrupt(void) out: printf("Clearing for interrupt tests\n"); /* clear registered callbacks */ - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID]; - rte_intr_callback_unregister(&test_intr_handle, + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID); + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, (void *)-1); - rte_intr_callback_unregister(&test_intr_handle, + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback_1, (void *)-1); - test_intr_handle = 
intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO]; - rte_intr_callback_unregister(&test_intr_handle, + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_UIO); + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, (void *)-1); - rte_intr_callback_unregister(&test_intr_handle, + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback_1, (void *)-1); - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM]; - rte_intr_callback_unregister(&test_intr_handle, + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_ALARM); + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, (void *)-1); - rte_intr_callback_unregister(&test_intr_handle, + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback_1, (void *)-1); - test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT]; - rte_intr_callback_unregister(&test_intr_handle, + test_intr_handle = rte_intr_handle_instance_index_get(intr_handles, + TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT); + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback, (void *)-1); - rte_intr_callback_unregister(&test_intr_handle, + rte_intr_callback_unregister(test_intr_handle, test_interrupt_callback_1, (void *)-1); rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);

From patchwork Fri Sep 3 12:41:00 2021
X-Patchwork-Submitter: Harman Kalra
X-Patchwork-Id: 97937
X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra
Date: Fri, 3 Sep 2021 18:11:00 +0530
Message-ID: <20210903124102.47425-6-hkalra@marvell.com>
In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com>
References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com>
Subject: [dpdk-dev] [PATCH v1 5/7] drivers: remove direct access to interrupt handle fields

Remove direct access to the interrupt handle structure fields; use the respective get/set APIs instead. All drivers and libraries are updated to access the interrupt handle fields through these APIs.
Signed-off-by: Harman Kalra
---
 drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 ++-
 drivers/bus/auxiliary/auxiliary_common.c | 2 +
 drivers/bus/auxiliary/linux/auxiliary.c | 11 ++
 drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
 drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
 drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
 drivers/bus/fslmc/fslmc_bus.c | 17 ++-
 drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +++-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
 drivers/bus/fslmc/rte_fslmc.h | 2 +-
 drivers/bus/ifpga/ifpga_bus.c | 16 ++-
 drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
 drivers/bus/pci/bsd/pci.c | 21 ++-
 drivers/bus/pci/linux/pci.c | 4 +-
 drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
 drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
 drivers/bus/pci/pci_common.c | 29 ++++-
 drivers/bus/pci/pci_common_uio.c | 21 ++--
 drivers/bus/pci/rte_bus_pci.h | 4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c | 7 ++
 drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
 drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
 drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
 drivers/common/cnxk/roc_cpt.c | 8 +-
 drivers/common/cnxk/roc_dev.c | 14 +--
 drivers/common/cnxk/roc_irq.c | 106 +++++++++-------
 drivers/common/cnxk/roc_nix_irq.c | 37 +++---
 drivers/common/cnxk/roc_npa.c | 2 +-
 drivers/common/cnxk/roc_platform.h | 34 +++++
 drivers/common/cnxk/roc_sso.c | 4 +-
 drivers/common/cnxk/roc_tim.c | 4 +-
 drivers/common/octeontx2/otx2_dev.c | 14 +--
 drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
 .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
 drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
 drivers/net/atlantic/atl_ethdev.c | 22 ++--
 drivers/net/avp/avp_ethdev.c | 8 +-
 drivers/net/axgbe/axgbe_ethdev.c | 12 +-
 drivers/net/axgbe/axgbe_mdio.c | 6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
 drivers/net/bnxt/bnxt_ethdev.c | 32 +++--
 drivers/net/bnxt/bnxt_irq.c | 4 +-
 drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
 drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
 drivers/net/e1000/em_ethdev.c | 24 ++--
 drivers/net/e1000/igb_ethdev.c | 84 ++++++-------
 drivers/net/ena/ena_ethdev.c | 36 +++---
 drivers/net/enic/enic_main.c | 27 ++--
 drivers/net/failsafe/failsafe.c | 24 +++-
 drivers/net/failsafe/failsafe_intr.c | 45 ++++---
 drivers/net/failsafe/failsafe_ops.c | 23 +++-
 drivers/net/failsafe/failsafe_private.h | 2 +-
 drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
 drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
 drivers/net/hns3/hns3_ethdev.c | 50 ++++----
 drivers/net/hns3/hns3_ethdev_vf.c | 57 +++++----
 drivers/net/hns3/hns3_rxtx.c | 2 +-
 drivers/net/i40e/i40e_ethdev.c | 55 ++++----
 drivers/net/i40e/i40e_ethdev_vf.c | 43 +++----
 drivers/net/iavf/iavf_ethdev.c | 41 +++---
 drivers/net/iavf/iavf_vchnl.c | 4 +-
 drivers/net/ice/ice_dcf.c | 10 +-
 drivers/net/ice/ice_dcf_ethdev.c | 23 ++--
 drivers/net/ice/ice_ethdev.c | 51 ++++----
 drivers/net/igc/igc_ethdev.c | 47 ++++---
 drivers/net/ionic/ionic_ethdev.c | 12 +-
 drivers/net/ixgbe/ixgbe_ethdev.c | 70 +++++------
 drivers/net/memif/memif_socket.c | 114 ++++++++++++-----
 drivers/net/memif/memif_socket.h | 4 +-
 drivers/net/memif/rte_eth_memif.c | 63 ++++++++--
 drivers/net/memif/rte_eth_memif.h | 2 +-
 drivers/net/mlx4/mlx4.c | 20 ++-
 drivers/net/mlx4/mlx4.h | 2 +-
 drivers/net/mlx4/mlx4_intr.c | 48 ++++---
 drivers/net/mlx5/linux/mlx5_os.c | 56 ++++++--
 drivers/net/mlx5/linux/mlx5_socket.c | 26 ++--
 drivers/net/mlx5/mlx5.h | 6 +-
 drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
 drivers/net/mlx5/mlx5_trigger.c | 4 +-
 drivers/net/mlx5/mlx5_txpp.c | 27 ++--
 drivers/net/netvsc/hn_ethdev.c | 4 +-
 drivers/net/nfp/nfp_common.c | 28 +++--
 drivers/net/nfp/nfp_ethdev.c | 13 +-
 drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
 drivers/net/ngbe/ngbe_ethdev.c | 31 +++--
 drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
 drivers/net/qede/qede_ethdev.c | 16 +--
 drivers/net/sfc/sfc_intr.c | 29 ++---
drivers/net/tap/rte_eth_tap.c | 37 ++++-- drivers/net/tap/rte_eth_tap.h | 2 +- drivers/net/tap/tap_intr.c | 33 +++-- drivers/net/thunderx/nicvf_ethdev.c | 13 ++ drivers/net/thunderx/nicvf_struct.h | 2 +- drivers/net/txgbe/txgbe_ethdev.c | 36 +++--- drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +++--- drivers/net/vhost/rte_eth_vhost.c | 78 +++++++----- drivers/net/virtio/virtio_ethdev.c | 17 +-- .../net/virtio/virtio_user/virtio_user_dev.c | 53 +++++--- drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 ++++--- drivers/raw/ifpga/ifpga_rawdev.c | 42 +++++-- drivers/raw/ntb/ntb.c | 10 +- .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +- drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +- drivers/vdpa/mlx5/mlx5_vdpa.c | 11 ++ drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +- drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++-- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 ++++--- lib/bbdev/rte_bbdev.c | 4 +- lib/eal/freebsd/eal_alarm.c | 49 +++++++- lib/eal/include/rte_eal_trace.h | 24 +--- lib/eal/linux/eal_alarm.c | 31 +++-- lib/eal/linux/eal_dev.c | 65 ++++++---- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/rte_ethdev.c | 14 +-- 118 files changed, 1879 insertions(+), 1182 deletions(-) diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c index 68ba523ea9..5097b240ee 100644 --- a/drivers/baseband/acc100/rte_acc100_pmd.c +++ b/drivers/baseband/acc100/rte_acc100_pmd.c @@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev) struct acc100_device *d = dev->data->dev_private; /* Only MSI are currently supported */ - if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI || - dev->intr_handle->type == RTE_INTR_HANDLE_UIO) { + if (rte_intr_handle_type_get(dev->intr_handle) == + RTE_INTR_HANDLE_VFIO_MSI || + rte_intr_handle_type_get(dev->intr_handle) == + RTE_INTR_HANDLE_UIO) { ret = allocate_info_ring(dev); if (ret < 0) { @@ -1096,8 +1098,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id) { struct acc100_queue *q = dev->data->queues[queue_id].queue_private; 
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI && - dev->intr_handle->type != RTE_INTR_HANDLE_UIO) + if (rte_intr_handle_type_get(dev->intr_handle) != + RTE_INTR_HANDLE_VFIO_MSI && + rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO) return -ENOTSUP; q->irq_enable = 1; @@ -1109,8 +1112,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id) { struct acc100_queue *q = dev->data->queues[queue_id].queue_private; - if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI && - dev->intr_handle->type != RTE_INTR_HANDLE_UIO) + if (rte_intr_handle_type_get(dev->intr_handle) != + RTE_INTR_HANDLE_VFIO_MSI && + rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO) return -ENOTSUP; q->irq_enable = 0; @@ -4178,7 +4182,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv, /* Fill HW specific part of device structure */ bbdev->device = &pci_dev->device; - bbdev->intr_handle = &pci_dev->intr_handle; + bbdev->intr_handle = pci_dev->intr_handle; bbdev->data->socket_id = pci_dev->device.numa_node; /* Invoke ACC100 device initialization function */ diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c index 6485cc824a..34a6da9a46 100644 --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c @@ -743,12 +743,13 @@ fpga_intr_enable(struct rte_bbdev *dev) * invoked when any FPGA queue issues interrupt. 
*/ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) - dev->intr_handle->efds[i] = dev->intr_handle->fd; + if (rte_intr_handle_efds_index_set(dev->intr_handle, i, + rte_intr_handle_fd_get(dev->intr_handle))) + return -rte_errno; - if (!dev->intr_handle->intr_vec) { - dev->intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->num_queues * sizeof(int), 0); - if (!dev->intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(dev->intr_handle)) { + if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec", + dev->data->num_queues)) { rte_bbdev_log(ERR, "Failed to allocate %u vectors", dev->data->num_queues); return -ENOMEM; @@ -1879,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv, /* Fill HW specific part of device structure */ bbdev->device = &pci_dev->device; - bbdev->intr_handle = &pci_dev->intr_handle; + bbdev->intr_handle = pci_dev->intr_handle; bbdev->data->socket_id = pci_dev->device.numa_node; /* Invoke FEC FPGA device initialization function */ diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c index 350c4248eb..0a718fbcd9 100644 --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c @@ -1014,18 +1014,20 @@ fpga_intr_enable(struct rte_bbdev *dev) * invoked when any FPGA queue issues interrupt. 
*/ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) - dev->intr_handle->efds[i] = dev->intr_handle->fd; + if (rte_intr_handle_efds_index_set(dev->intr_handle, i, + rte_intr_handle_fd_get(dev->intr_handle))) + return -rte_errno; - if (!dev->intr_handle->intr_vec) { - dev->intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->num_queues * sizeof(int), 0); - if (!dev->intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(dev->intr_handle)) { + if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec", + dev->data->num_queues)) { rte_bbdev_log(ERR, "Failed to allocate %u vectors", dev->data->num_queues); return -ENOMEM; } } + ret = rte_intr_enable(dev->intr_handle); if (ret < 0) { rte_bbdev_log(ERR, @@ -2369,7 +2371,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv, /* Fill HW specific part of device structure */ bbdev->device = &pci_dev->device; - bbdev->intr_handle = &pci_dev->intr_handle; + bbdev->intr_handle = pci_dev->intr_handle; bbdev->data->socket_id = pci_dev->device.numa_node; /* Invoke FEC FPGA device initialization function */ diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c index 603b6fdc02..7298a03d86 100644 --- a/drivers/bus/auxiliary/auxiliary_common.c +++ b/drivers/bus/auxiliary/auxiliary_common.c @@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev) if (ret == 0) { rte_auxiliary_remove_device(adev); rte_devargs_remove(dev->devargs); + if (adev->intr_handle) + rte_intr_handle_instance_free(adev->intr_handle); free(adev); } return ret; diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c index 9bd4ee3295..236fdc9bf7 100644 --- a/drivers/bus/auxiliary/linux/auxiliary.c +++ b/drivers/bus/auxiliary/linux/auxiliary.c @@ -39,6 +39,15 @@ auxiliary_scan_one(const char *dirname, const char *name) dev->device.name = dev->name; dev->device.bus = &auxiliary_bus.bus; + /* Allocate interrupt instance */ + dev->intr_handle = + 
rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + false); + if (!dev->intr_handle) { + free(dev); + return -1; + } + /* Get NUMA node, default to 0 if not present */ snprintf(filename, sizeof(filename), "%s/%s/numa_node", dirname, name); @@ -67,6 +76,8 @@ auxiliary_scan_one(const char *dirname, const char *name) rte_devargs_remove(dev2->device.devargs); auxiliary_on_scan(dev2); } + if (dev->intr_handle) + rte_intr_handle_instance_free(dev->intr_handle); free(dev); } return 0; diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h index 2462bad2ba..7642964622 100644 --- a/drivers/bus/auxiliary/rte_bus_auxiliary.h +++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h @@ -116,7 +116,7 @@ struct rte_auxiliary_device { TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */ struct rte_device device; /**< Inherit core device */ char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */ - struct rte_intr_handle intr_handle; /**< Interrupt handle */ + struct rte_intr_handle *intr_handle; /**< Interrupt handle */ struct rte_auxiliary_driver *driver; /**< Device driver */ }; diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index e499305d85..52b2a4883e 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -172,6 +172,15 @@ dpaa_create_device_list(void) dev->device.bus = &rte_dpaa_bus.bus; + /* Allocate interrupt handle instance */ + dev->intr_handle = rte_intr_handle_instance_alloc( + RTE_INTR_HANDLE_DEFAULT_SIZE, false); + if (!dev->intr_handle) { + DPAA_BUS_LOG(ERR, "Failed to allocate intr handle"); + ret = -ENOMEM; + goto cleanup; + } + cfg = &dpaa_netcfg->port_cfg[i]; fman_intf = cfg->fman_if; @@ -214,6 +223,15 @@ dpaa_create_device_list(void) goto cleanup; } + /* Allocate interrupt handle instance */ + dev->intr_handle = rte_intr_handle_instance_alloc( + RTE_INTR_HANDLE_DEFAULT_SIZE, false); + if (!dev->intr_handle) { + DPAA_BUS_LOG(ERR, "Failed to allocate 
intr handle"); + ret = -ENOMEM; + goto cleanup; + } + dev->device_type = FSL_DPAA_CRYPTO; dev->id.dev_id = rte_dpaa_bus.device_count + i; @@ -247,6 +265,7 @@ dpaa_clean_device_list(void) TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) { TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next); + rte_intr_handle_instance_free(dev->intr_handle); free(dev); dev = NULL; } @@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle) return errno; } - intr_handle->fd = fd; - intr_handle->type = RTE_INTR_HANDLE_EXT; + if (rte_intr_handle_fd_set(intr_handle, fd)) + return -rte_errno; + + if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT)) + return -rte_errno; return 0; } @@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void) TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) { if (dev->device_type == FSL_DPAA_ETH) { - ret = rte_dpaa_setup_intr(&dev->intr_handle); + ret = rte_dpaa_setup_intr(dev->intr_handle); if (ret) DPAA_BUS_ERR("Error setting up interrupt.\n"); } diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index 48d5cf4625..f32cb038b4 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -101,7 +101,7 @@ struct rte_dpaa_device { }; struct rte_dpaa_driver *driver; struct dpaa_device_id id; - struct rte_intr_handle intr_handle; + struct rte_intr_handle *intr_handle; enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */ char name[RTE_ETH_NAME_MAX_LEN]; }; diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index becc455f6b..3a1b0d0a45 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void) TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) { TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next); + if (dev->intr_handle) + rte_intr_handle_instance_free(dev->intr_handle); free(dev); dev = NULL; } @@ -160,6 +162,16 @@ scan_one_fslmc_device(char 
*dev_name)
 	dev->device.bus = &rte_fslmc_bus.bus;
 
+	/* Allocate interrupt instance */
+	dev->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (!dev->intr_handle) {
+		DPAA2_BUS_ERR("Failed to allocate intr handle");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
 	/* Parse the device name and ID */
 	t_ptr = strtok(dup_dev_name, ".");
 	if (!t_ptr) {
@@ -220,8 +232,11 @@ scan_one_fslmc_device(char *dev_name)
 cleanup:
 	if (dup_dev_name)
 		free(dup_dev_name);
-	if (dev)
+	if (dev) {
+		if (dev->intr_handle)
+			rte_intr_handle_instance_free(dev->intr_handle);
 		free(dev);
+	}
 
 	return ret;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..b002b5e443 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 	irq_set->index = index;
 	irq_set->start = 0;
 	fd_ptr = (int *)&irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_handle_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret) {
 		DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
-			      intr_handle->fd, errno, strerror(errno));
+			      rte_intr_handle_fd_get(intr_handle), errno,
+			      strerror(errno));
 		return ret;
 	}
 
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	irq_set->start = 0;
 	irq_set->count = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
 		DPAA2_BUS_ERR(
 			"Error disabling dpaa2 interrupts for fd %d",
-			intr_handle->fd);
+			rte_intr_handle_fd_get(intr_handle));
 
 	return ret;
 }
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		return -1;
 	}
 
-	intr_handle->fd = fd;
-	intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
-	intr_handle->vfio_dev_fd = vfio_dev_fd;
+	if (rte_intr_handle_fd_set(intr_handle, fd))
+		return -rte_errno;
+
+	if (rte_intr_handle_type_set(intr_handle,
+				     RTE_INTR_HANDLE_VFIO_MSI))
+		return -rte_errno;
+
+	if (rte_intr_handle_dev_fd_set(intr_handle, vfio_dev_fd))
+		return -rte_errno;
+
 	return 0;
 }
 
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
 					  device_info.num_irqs);
 		break;
 	case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..479d3d71d7 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
 	int threshold = 0x3, timeout = 0xFF;
 
 	dpio_epoll_fd = epoll_create(1);
-	ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+	ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
 	if (ret) {
 		DPAA2_BUS_ERR("Interrupt registeration failed");
 		return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
 	qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
 	qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
 
-	eventfd = dpio_dev->intr_handle.fd;
+	eventfd = rte_intr_handle_fd_get(dpio_dev->intr_handle);
 	epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
 	epoll_ev.data.fd = eventfd;
 
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
 {
 	int ret;
 
-	ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+	ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
 	if (ret)
 		DPAA2_BUS_ERR("DPIO interrupt disable failed");
 
@@ -388,6 +388,15 @@ dpaa2_create_dpio_device(int vdev_fd,
 	/* Using single portal for all devices */
 	dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
 
+	/* Allocate interrupt instance */
+	dpio_dev->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       true);
+	if (!dpio_dev->intr_handle) {
+		DPAA2_BUS_ERR("Failed to allocate intr handle");
+		goto err;
+	}
+
 	dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
 				     RTE_CACHE_LINE_SIZE);
 	if (!dpio_dev->dpio) {
@@ -490,7 +499,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	io_space_count++;
 	dpio_dev->index = io_space_count;
 
-	if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+	if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
 		DPAA2_BUS_ERR("Fail to setup interrupt for %d",
 			      dpio_dev->hw_id);
 		goto err;
@@ -538,6 +547,8 @@ dpaa2_create_dpio_device(int vdev_fd,
 		rte_free(dpio_dev->dpio);
 	}
 
+	if (dpio_dev->intr_handle)
+		rte_intr_handle_instance_free(dpio_dev->intr_handle);
 	rte_free(dpio_dev);
 
 	/* For each element in the list, cleanup */
@@ -549,6 +560,8 @@ dpaa2_create_dpio_device(int vdev_fd,
 				 dpio_dev->token);
 			rte_free(dpio_dev->dpio);
 		}
+		if (dpio_dev->intr_handle)
+			rte_intr_handle_instance_free(dpio_dev->intr_handle);
 		rte_free(dpio_dev);
 	}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
 	uintptr_t qbman_portal_ci_paddr;
 	/**< Physical address of Cache Inhibit Area */
 	uintptr_t ci_size; /**< Size of the CI region */
-	struct rte_intr_handle intr_handle; /* Interrupt related info */
+	struct rte_intr_handle *intr_handle; /* Interrupt related info */
 	int32_t epoll_fd; /**< File descriptor created for interrupt polling */
 	int32_t hw_id; /**< An unique ID of this DPIO device instance */
 	struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 37d45dffe5..e46110b3ea 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -125,7 +125,7 @@ struct rte_dpaa2_device {
 	};
 	enum rte_dpaa2_dev_type dev_type; /**< Device Type */
 	uint16_t object_id;               /**< DPAA2 Object ID */
-	struct rte_intr_handle intr_handle; /**< Interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
 	struct rte_dpaa2_driver *driver;  /**< Associated driver */
 	char name[FSLMC_OBJECT_MAX_LEN];  /**< DPAA2 Object name*/
 };
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..bebb584796 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,15 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
 	afu_dev->id.uuid.uuid_high = 0;
 	afu_dev->id.port = afu_pr_conf.afu_id.port;
 
+	/* Allocate interrupt instance */
+	afu_dev->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (!afu_dev->intr_handle) {
+		IFPGA_BUS_ERR("Failed to allocate intr handle");
+		goto end;
+	}
+
 	if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
 		rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
 
@@ -189,8 +198,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
 	rte_kvargs_free(kvlist);
 	if (path)
 		free(path);
-	if (afu_dev)
+	if (afu_dev) {
+		if (afu_dev->intr_handle)
+			rte_intr_handle_instance_free(afu_dev->intr_handle);
 		free(afu_dev);
+	}
 
 	return NULL;
 }
@@ -396,6 +408,8 @@ ifpga_unplug(struct rte_device *dev)
 	TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
 	rte_devargs_remove(dev->devargs);
 
+	if (afu_dev->intr_handle)
+		rte_intr_handle_instance_free(afu_dev->intr_handle);
 	free(afu_dev);
 
 	return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..38caaf2e8f 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
 	struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
 	/**< AFU Memory Resource */
 	struct rte_afu_shared shared;
-	struct rte_intr_handle intr_handle; /**< Interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
 	struct rte_afu_driver *driver;      /**< Associated driver */
 	char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
 } __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..8a84eb15ea 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
 {
 	rte_free(uio_res);
 
-	if (dev->intr_handle.fd) {
-		close(dev->intr_handle.fd);
-		dev->intr_handle.fd = -1;
-		dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+	if (rte_intr_handle_fd_get(dev->intr_handle)) {
+		close(rte_intr_handle_fd_get(dev->intr_handle));
+		rte_intr_handle_fd_set(dev->intr_handle, -1);
+		rte_intr_handle_type_set(dev->intr_handle,
+					 RTE_INTR_HANDLE_UNKNOWN);
 	}
 }
 
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
 	}
 
 	/* save fd if in primary process */
-	dev->intr_handle.fd = open(devname, O_RDWR);
-	if (dev->intr_handle.fd < 0) {
+	if (rte_intr_handle_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+		RTE_LOG(WARNING, EAL, "Failed to save fd");
+		goto error;
+	}
+
+	if (rte_intr_handle_fd_get(dev->intr_handle) < 0) {
 		RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
 			devname, strerror(errno));
 		goto error;
 	}
-	dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+	if (rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+		goto error;
 
 	/* allocate the mapping details for secondary processes*/
 	*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
 		void *buf, size_t len, off_t offset)
 {
 	char devname[RTE_DEV_NAME_MAX_LEN] = "";
-	const struct rte_intr_handle *intr_handle = &device->intr_handle;
+	const struct rte_intr_handle *intr_handle = device->intr_handle;
 
 	switch (device->kdrv) {
 	case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
 		const void *buf, size_t len, off_t offset)
 {
 	char devname[RTE_DEV_NAME_MAX_LEN] = "";
-	const struct rte_intr_handle *intr_handle = &device->intr_handle;
+	const struct rte_intr_handle *intr_handle = device->intr_handle;
 
 	switch (device->kdrv) {
 	case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2529377f9b 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
 pci_uio_read_config(const struct rte_intr_handle *intr_handle,
 		    void *buf, size_t len, off_t offset)
 {
-	return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+	int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+	return pread(uio_cfg_fd, buf, len, offset);
 }
 
 int
 pci_uio_write_config(const struct rte_intr_handle *intr_handle,
 		     const void *buf, size_t len, off_t offset)
 {
-	return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+	int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+	return pwrite(uio_cfg_fd, buf, len, offset);
 }
 
 static int
@@ -198,16 +202,20 @@ void
 pci_uio_free_resource(struct rte_pci_device *dev,
 		      struct mapped_pci_resource *uio_res)
 {
+	int uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+
 	rte_free(uio_res);
 
-	if (dev->intr_handle.uio_cfg_fd >= 0) {
-		close(dev->intr_handle.uio_cfg_fd);
-		dev->intr_handle.uio_cfg_fd = -1;
+	if (uio_cfg_fd >= 0) {
+		close(uio_cfg_fd);
+		rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
 	}
-	if (dev->intr_handle.fd >= 0) {
-		close(dev->intr_handle.fd);
-		dev->intr_handle.fd = -1;
-		dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+	if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+		close(rte_intr_handle_fd_get(dev->intr_handle));
+		rte_intr_handle_fd_set(dev->intr_handle, -1);
+		rte_intr_handle_type_set(dev->intr_handle,
+					 RTE_INTR_HANDLE_UNKNOWN);
 	}
 }
 
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
 	char dirname[PATH_MAX];
 	char cfgname[PATH_MAX];
 	char devname[PATH_MAX]; /* contains the /dev/uioX */
-	int uio_num;
+	int uio_num, fd, uio_cfg_fd;
 	struct rte_pci_addr *loc;
 
 	loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
 	snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
 
 	/* save fd if in primary process */
-	dev->intr_handle.fd = open(devname, O_RDWR);
-	if (dev->intr_handle.fd < 0) {
+	fd = open(devname, O_RDWR);
+	if (fd < 0) {
 		RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
 			devname, strerror(errno));
 		goto error;
 	}
 
+	if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+		goto error;
+
 	snprintf(cfgname, sizeof(cfgname),
		 "/sys/class/uio/uio%u/device/config", uio_num);
-	dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
-	if (dev->intr_handle.uio_cfg_fd < 0) {
+
+	uio_cfg_fd = open(cfgname, O_RDWR);
+	if (uio_cfg_fd < 0) {
 		RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
 			cfgname, strerror(errno));
 		goto error;
 	}
 
-	if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
-		dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
-	else {
-		dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+	if (rte_intr_handle_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+		goto error;
+
+	if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+		if (rte_intr_handle_type_set(dev->intr_handle,
+					     RTE_INTR_HANDLE_UIO))
+			goto error;
+	} else {
+		if (rte_intr_handle_type_set(dev->intr_handle,
+					     RTE_INTR_HANDLE_UIO_INTX))
+			goto error;
 
 		/* set bus master that is not done by uio_pci_generic */
-		if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+		if (pci_uio_set_bus_master(uio_cfg_fd)) {
 			RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
 			goto error;
 		}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
 	char buf[BUFSIZ];
 	uint64_t phys_addr, end_addr, flags;
 	unsigned long base;
-	int i;
+	int i, fd;
 
 	/* open and read addresses of the corresponding resource in sysfs */
 	snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
 	}
 
 	/* FIXME only for primary process ? */
-	if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+	if (rte_intr_handle_type_get(dev->intr_handle) ==
+	    RTE_INTR_HANDLE_UNKNOWN) {
 		int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
 		if (uio_num < 0) {
 			RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
 		}
 
 		snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
-		dev->intr_handle.fd = open(filename, O_RDWR);
-		if (dev->intr_handle.fd < 0) {
+		fd = open(filename, O_RDWR);
+		if (fd < 0) {
 			RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
 				filename, strerror(errno));
 			goto error;
 		}
-		dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+		if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+			goto error;
+
+		if (rte_intr_handle_type_set(dev->intr_handle,
+					     RTE_INTR_HANDLE_UIO))
+			goto error;
 	}
 
 	RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..f920163580 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
 pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
 		    void *buf, size_t len, off_t offs)
 {
-	return pread64(intr_handle->vfio_dev_fd, buf, len,
+	int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+	return pread64(vfio_dev_fd, buf, len,
 	       VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
 }
 
@@ -55,7 +57,9 @@ int
 pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
 		    const void *buf, size_t len, off_t offs)
 {
-	return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+	int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+	return pwrite64(vfio_dev_fd, buf, len,
 	       VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
 }
 
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
 			return -1;
 		}
 
-		dev->intr_handle.fd = fd;
-		dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+		if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+			return -1;
+
+		if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+			return -1;
 
 		switch (i) {
 		case VFIO_PCI_MSIX_IRQ_INDEX:
 			intr_mode = RTE_INTR_MODE_MSIX;
-			dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+			rte_intr_handle_type_set(dev->intr_handle,
						 RTE_INTR_HANDLE_VFIO_MSIX);
 			break;
 		case VFIO_PCI_MSI_IRQ_INDEX:
 			intr_mode = RTE_INTR_MODE_MSI;
-			dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+			rte_intr_handle_type_set(dev->intr_handle,
						 RTE_INTR_HANDLE_VFIO_MSI);
 			break;
 		case VFIO_PCI_INTX_IRQ_INDEX:
 			intr_mode = RTE_INTR_MODE_LEGACY;
-			dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+			rte_intr_handle_type_set(dev->intr_handle,
						 RTE_INTR_HANDLE_VFIO_LEGACY);
 			break;
 		default:
 			RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
 		return -1;
 	}
 
-	dev->vfio_req_intr_handle.fd = fd;
-	dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
-	dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+	if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, fd))
+		return -1;
+
+	if (rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+				     RTE_INTR_HANDLE_VFIO_REQ))
+		return -1;
+
+	if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+		return -1;
+
-	ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+	ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
 					 pci_vfio_req_handler,
 					 (void *)&dev->device);
 	if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
 		goto error;
 	}
 
-	ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+	ret = rte_intr_enable(dev->vfio_req_intr_handle);
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
-		ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+		ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
 						   pci_vfio_req_handler,
 						   (void *)&dev->device);
 		if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
 error:
 	close(fd);
 
-	dev->vfio_req_intr_handle.fd = -1;
-	dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
-	dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+	rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+	rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+				 RTE_INTR_HANDLE_UNKNOWN);
+	rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
 
 	return -1;
 }
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
 {
 	int ret;
 
-	ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+	ret = rte_intr_disable(dev->vfio_req_intr_handle);
 	if (ret) {
 		RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
 		return -1;
 	}
 
-	ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+	ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
 						pci_vfio_req_handler,
 						(void *)&dev->device);
 	if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
 		return -1;
 	}
 
-	close(dev->vfio_req_intr_handle.fd);
+	close(rte_intr_handle_fd_get(dev->vfio_req_intr_handle));
 
-	dev->vfio_req_intr_handle.fd = -1;
-	dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
-	dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+	rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+	rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+				 RTE_INTR_HANDLE_UNKNOWN);
+	rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
 
 	return 0;
 }
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
 	struct pci_map *maps;
 
-	dev->intr_handle.fd = -1;
+	if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+		return -1;
+
 #ifdef HAVE_VFIO_DEV_REQ_INTERFACE
-	dev->vfio_req_intr_handle.fd = -1;
+	if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+		return -1;
+
 #endif
 
 	/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
 	struct pci_map *maps;
 
-	dev->intr_handle.fd = -1;
+	if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+		return -1;
+
 #ifdef HAVE_VFIO_DEV_REQ_INTERFACE
-	dev->vfio_req_intr_handle.fd = -1;
+	if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+		return -1;
 #endif
 
 	/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
 	}
 
 	/* we need save vfio_dev_fd, so it can be used during release */
-	dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+	if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+		goto err_vfio_dev_fd;
 #ifdef HAVE_VFIO_DEV_REQ_INTERFACE
-	dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+	if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+		goto err_vfio_dev_fd;
 #endif
 
 	return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
 	struct rte_pci_addr *loc = &dev->addr;
 	struct mapped_pci_resource *vfio_res = NULL;
 	struct mapped_pci_res_list *vfio_res_list;
-	int ret;
+	int ret, vfio_dev_fd;
 
 	/* store PCI address string */
 	snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
 	}
 #endif
 
-	if (close(dev->intr_handle.fd) < 0) {
+	if (close(rte_intr_handle_fd_get(dev->intr_handle)) < 0) {
 		RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
 			pci_addr);
 		return -1;
 	}
 
-	if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+	if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
 		RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
 			pci_addr);
 		return -1;
 	}
 
 	ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
-				      dev->intr_handle.vfio_dev_fd);
+				      vfio_dev_fd);
 	if (ret < 0) {
 		RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
 		return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
 	struct rte_pci_addr *loc = &dev->addr;
 	struct mapped_pci_resource *vfio_res = NULL;
 	struct mapped_pci_res_list *vfio_res_list;
-	int ret;
+	int ret, vfio_dev_fd;
 
 	/* store PCI address string */
 	snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
 		 loc->domain, loc->bus, loc->devid, loc->function);
 
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
 	ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
-				      dev->intr_handle.vfio_dev_fd);
+				      vfio_dev_fd);
 	if (ret < 0) {
 		RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
 		return ret;
@@ -1079,9 +1109,10 @@ void
 pci_vfio_ioport_read(struct rte_pci_ioport *p,
 		     void *data, size_t len, off_t offset)
 {
-	const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+	const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+	int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
 
-	if (pread64(intr_handle->vfio_dev_fd, data,
+	if (pread64(vfio_dev_fd, data,
 		    len, p->base + offset) <= 0)
 		RTE_LOG(ERR, EAL,
 			"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
 pci_vfio_ioport_write(struct rte_pci_ioport *p,
 		      const void *data, size_t len, off_t offset)
 {
-	const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+	const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+	int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
 
-	if (pwrite64(intr_handle->vfio_dev_fd, data,
+	if (pwrite64(vfio_dev_fd, data,
 		     len, p->base + offset) <= 0)
 		RTE_LOG(ERR, EAL,
 			"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..b3feb4e40e 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
 	}
 
 	if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+		/* Allocate interrupt instance for pci device */
+		dev->intr_handle = rte_intr_handle_instance_alloc(
+				RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+		if (!dev->intr_handle) {
+			RTE_LOG(ERR, EAL,
+				"Failed to create interrupt instance for %s\n",
+				dev->device.name);
+			return -ENOMEM;
+		}
+
+		dev->vfio_req_intr_handle = rte_intr_handle_instance_alloc(
+				RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+		if (!dev->vfio_req_intr_handle) {
+			RTE_LOG(ERR, EAL,
+				"Failed to create vfio req interrupt instance for %s\n",
+				dev->device.name);
+			return -ENOMEM;
+		}
 		/* map resources for devices that use igb_uio */
 		ret = rte_pci_map_device(dev);
 		if (ret != 0) {
@@ -253,8 +271,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
 			 * driver needs mapped resources.
 			 */
 			!(ret > 0 &&
-				(dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+				(dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
 			rte_pci_unmap_device(dev);
+			rte_intr_handle_instance_free(dev->intr_handle);
+			rte_intr_handle_instance_free(
+					dev->vfio_req_intr_handle);
+		}
 	} else {
 		dev->device.driver = &dr->driver;
 	}
@@ -296,9 +318,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
 	dev->driver = NULL;
 	dev->device.driver = NULL;
 
-	if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+	if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
 		/* unmap resources for devices that use igb_uio */
 		rte_pci_unmap_device(dev);
+		rte_intr_handle_instance_free(dev->intr_handle);
+		rte_intr_handle_instance_free(dev->vfio_req_intr_handle);
+	}
 
 	return 0;
 }
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..9b9a2e4a20 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
 	struct mapped_pci_res_list *uio_res_list =
 		RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
 
-	dev->intr_handle.fd = -1;
-	dev->intr_handle.uio_cfg_fd = -1;
+	if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+		return -1;
+
+	if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+		return -1;
 
 	/* secondary processes - use already recorded details */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
 	struct mapped_pci_resource *uio_res;
 	struct mapped_pci_res_list *uio_res_list =
		RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+	int uio_cfg_fd;
 
 	if (dev == NULL)
 		return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
 	rte_free(uio_res);
 
 	/* close fd if in primary process */
-	close(dev->intr_handle.fd);
-	if (dev->intr_handle.uio_cfg_fd >= 0) {
-		close(dev->intr_handle.uio_cfg_fd);
-		dev->intr_handle.uio_cfg_fd = -1;
+	close(rte_intr_handle_fd_get(dev->intr_handle));
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+	if (uio_cfg_fd >= 0) {
+		close(uio_cfg_fd);
+		rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
 	}
-	dev->intr_handle.fd = -1;
-	dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+	rte_intr_handle_fd_set(dev->intr_handle, -1);
+	rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
 }
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..fe679c467c 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -70,12 +70,12 @@ struct rte_pci_device {
 	struct rte_pci_id id;               /**< PCI ID. */
 	struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
 					    /**< PCI Memory Resource */
-	struct rte_intr_handle intr_handle; /**< Interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
 	struct rte_pci_driver *driver;      /**< PCI driver used in probing */
 	uint16_t max_vfs;                   /**< sriov enable if not zero */
 	enum rte_pci_kernel_driver kdrv;    /**< Kernel driver passthrough */
 	char name[PCI_PRI_STR_SIZE+1];      /**< PCI location (ASCII) */
-	struct rte_intr_handle vfio_req_intr_handle;
+	struct rte_intr_handle *vfio_req_intr_handle;
 				/**< Handler of VFIO request interrupt */
 };
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 3c924eee14..bce94d5d72 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -297,6 +297,13 @@ vmbus_scan_one(const char *name)
 	dev->device.devargs = vmbus_devargs_lookup(dev);
 
+	/* Allocate interrupt handle instance */
+	dev->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (!dev->intr_handle)
+		goto error;
+
 	/* device is valid, add in list (sorted) */
 	VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
 
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index b52ca5bf1d..f506811d98 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -29,9 +29,11 @@ static void *vmbus_map_addr;
 /* Control interrupts */
 void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
 {
-	if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+	if (write(rte_intr_handle_fd_get(dev->intr_handle), &onoff,
+		  sizeof(onoff)) < 0) {
 		VMBUS_LOG(ERR, "cannot write to %d:%s",
-			  dev->intr_handle.fd, strerror(errno));
+			  rte_intr_handle_fd_get(dev->intr_handle),
+			  strerror(errno));
 	}
 }
 
@@ -40,7 +42,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
 	int32_t count;
 	int cc;
 
-	cc = read(dev->intr_handle.fd, &count, sizeof(count));
+	cc = read(rte_intr_handle_fd_get(dev->intr_handle), &count,
+		  sizeof(count));
 	if (cc < (int)sizeof(count)) {
 		if (cc < 0) {
 			VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -60,15 +63,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
 {
 	rte_free(uio_res);
 
-	if (dev->intr_handle.uio_cfg_fd >= 0) {
-		close(dev->intr_handle.uio_cfg_fd);
-		dev->intr_handle.uio_cfg_fd = -1;
+	if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+		close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+		rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
 	}
 
-	if (dev->intr_handle.fd >= 0) {
-		close(dev->intr_handle.fd);
-		dev->intr_handle.fd = -1;
-		dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+	if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+		close(rte_intr_handle_fd_get(dev->intr_handle));
+		rte_intr_handle_fd_set(dev->intr_handle, -1);
+		rte_intr_handle_type_set(dev->intr_handle,
+					 RTE_INTR_HANDLE_UNKNOWN);
 	}
 }
 
@@ -77,16 +81,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
 			 struct mapped_vmbus_resource **uio_res)
 {
 	char devname[PATH_MAX]; /* contains the /dev/uioX */
+	int fd;
 
 	/* save fd if in primary process */
 	snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
-	dev->intr_handle.fd = open(devname, O_RDWR);
-	if (dev->intr_handle.fd < 0) {
+	fd = open(devname, O_RDWR);
+	if (fd < 0) {
 		VMBUS_LOG(ERR, "Cannot open %s: %s",
 			  devname, strerror(errno));
 		goto error;
 	}
-	dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+	if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+		goto error;
+
+	if (rte_intr_handle_type_set(dev->intr_handle,
+				     RTE_INTR_HANDLE_UIO_INTX))
+		goto error;
 
 	/* allocate the mapping details for secondary processes*/
 	*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..07916478ef 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -74,7 +74,7 @@ struct rte_vmbus_device {
 	struct vmbus_channel *primary; /**< VMBUS primary channel */
 	struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
-	struct rte_intr_handle intr_handle;    /**< Interrupt handle */
+	struct rte_intr_handle *intr_handle;    /**< Interrupt handle */
 	struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
 };
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 8582e32c1d..fb0f051f81 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -149,9 +149,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
 	int ret;
 
 	/* TODO: handle rescind */
-	dev->intr_handle.fd = -1;
-	dev->intr_handle.uio_cfg_fd = -1;
-	dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+	if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+		return -1;
+
+	if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+		return -1;
+
+	if (rte_intr_handle_type_set(dev->intr_handle,
+				     RTE_INTR_HANDLE_UNKNOWN))
+		return -1;
 
 	/* secondary processes - use already recorded details */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -223,12 +229,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
 	rte_free(uio_res);
 
 	/* close fd if in primary process */
-	close(dev->intr_handle.fd);
-	if (dev->intr_handle.uio_cfg_fd >= 0) {
-		close(dev->intr_handle.uio_cfg_fd);
-		dev->intr_handle.uio_cfg_fd = -1;
+	close(rte_intr_handle_fd_get(dev->intr_handle));
+	if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+		close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+		rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
 	}
-	dev->intr_handle.fd = -1;
-	dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+	rte_intr_handle_fd_set(dev->intr_handle, -1);
+	rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
 }
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index c001497f74..b0d16bf81c 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -62,7 +62,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
 	struct plt_intr_handle *handle;
 	int rc, vec;
 
-	handle = &pci_dev->intr_handle;
+	handle = pci_dev->intr_handle;
 
 	vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
 	/* Clear err interrupt */
@@ -82,7 +82,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
 	struct plt_intr_handle *handle;
 	int vec;
 
-	handle = &pci_dev->intr_handle;
+	handle = pci_dev->intr_handle;
 
 	vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
 	/* Clear err interrupt */
@@ -126,7 +126,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
 	struct plt_intr_handle *handle;
 	int rc, vec;
 
-	handle = &pci_dev->intr_handle;
+	handle = pci_dev->intr_handle;
 
 	vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
 
@@ -149,7 +149,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
 	struct plt_intr_handle *handle;
 	int vec;
 
-	handle = &pci_dev->intr_handle;
+	handle = pci_dev->intr_handle;
 
 	vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index c14f189f9b..2dce7936fe 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -608,7 +608,7 @@ roc_af_pf_mbox_irq(void *param)
 static int
 mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 	int i, rc;
 
 	/* HW clear irq */
@@ -658,7 +658,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 static int
 mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 	int rc;
 
 	/* Clear irq */
@@ -691,7 +691,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 static void
 mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 	int i;
 
 	/* HW clear irq */
@@ -722,7 +722,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 static void
 mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	/* Clear irq */
 	plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -806,7 +806,7 @@ roc_pf_vf_flr_irq(void *param)
 static int
 vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 	int i;
 
 	plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -827,7 +827,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
 static int
 vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
 {
-	struct plt_intr_handle *handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *handle = pci_dev->intr_handle;
 	int i, rc;
 
 	plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1143,7 +1143,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
 int
 dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
 {
-	struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
 	struct mbox *mbox;
 
 	/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c30d7..40c472e7d3 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
 irq_get_info(struct plt_intr_handle *intr_handle)
 {
 	struct vfio_irq_info irq = {.argsz = sizeof(irq)};
-	int rc;
+	int rc, vfio_dev_fd;
 
 	irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
 
-	rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+	vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
 	if (rc < 0) {
 		plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
 		return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
 	if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
 		plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d",
 			irq.count, PLT_MAX_RXTX_INTR_VEC_ID);
-		intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+		plt_intr_handle_max_intr_set(intr_handle,
+					     PLT_MAX_RXTX_INTR_VEC_ID);
 	} else {
-		intr_handle->max_intr = irq.count;
+		if (plt_intr_handle_max_intr_set(intr_handle, irq.count))
+			return -1;
 	}
 
 	return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
 {
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
+	int len, rc, vfio_dev_fd;
 	int32_t *fd_ptr;
-	int len, rc;
 
-	if (vec > intr_handle->max_intr) {
+	if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
 		plt_err("vector=%d greater than max_intr=%d", vec,
-			intr_handle->max_intr);
+			plt_intr_handle_max_intr_get(intr_handle));
 		return -EINVAL;
 	}
 
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
 	/* Use vec fd to set interrupt vectors */
 	fd_ptr = (int32_t *)&irq_set->data[0];
-	fd_ptr[0] = intr_handle->efds[vec];
+	fd_ptr[0] = plt_intr_handle_efds_index_get(intr_handle, vec);
 
-	rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+	rc
= ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (rc) plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); @@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle) { char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; struct vfio_irq_set *irq_set; + int len, rc, vfio_dev_fd; int32_t *fd_ptr; - int len, rc; uint32_t i; - if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) { + if (plt_intr_handle_max_intr_get(intr_handle) > + PLT_MAX_RXTX_INTR_VEC_ID) { plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d", - intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID); + plt_intr_handle_max_intr_get(intr_handle), + PLT_MAX_RXTX_INTR_VEC_ID); return -ERANGE; } len = sizeof(struct vfio_irq_set) + - sizeof(int32_t) * intr_handle->max_intr; + sizeof(int32_t) * plt_intr_handle_max_intr_get(intr_handle); irq_set = (struct vfio_irq_set *)irq_set_buf; irq_set->argsz = len; irq_set->start = 0; - irq_set->count = intr_handle->max_intr; + irq_set->count = plt_intr_handle_max_intr_get(intr_handle); irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER; irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; @@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle) for (i = 0; i < irq_set->count; i++) fd_ptr[i] = -1; - rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle); + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (rc) plt_err("Failed to set irqs vector rc=%d", rc); @@ -121,7 +128,7 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle) { /* Clear max_intr to indicate re-init next time */ - intr_handle->max_intr = 0; + plt_intr_handle_max_intr_set(intr_handle, 0); return plt_intr_disable(intr_handle); } @@ -129,42 +136,50 @@ int dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, void *data, unsigned int vec) { - struct plt_intr_handle tmp_handle; - int rc; + struct plt_intr_handle *tmp_handle; + uint32_t nb_efd, tmp_nb_efd; + int rc, 
fd; /* If no max_intr read from VFIO */ - if (intr_handle->max_intr == 0) { + if (plt_intr_handle_max_intr_get(intr_handle) == 0) { irq_get_info(intr_handle); irq_init(intr_handle); } - if (vec > intr_handle->max_intr) { + if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) { plt_err("Vector=%d greater than max_intr=%d", vec, - intr_handle->max_intr); + plt_intr_handle_max_intr_get(intr_handle)); return -EINVAL; } - tmp_handle = *intr_handle; + tmp_handle = intr_handle; /* Create new eventfd for interrupt vector */ - tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); - if (tmp_handle.fd == -1) + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (fd == -1) return -ENODEV; + if (plt_intr_handle_fd_set(tmp_handle, fd)) + return errno; + /* Register vector interrupt callback */ - rc = plt_intr_callback_register(&tmp_handle, cb, data); + rc = plt_intr_callback_register(tmp_handle, cb, data); if (rc) { plt_err("Failed to register vector:0x%x irq callback.", vec); return rc; } - intr_handle->efds[vec] = tmp_handle.fd; - intr_handle->nb_efd = - (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd; - if ((intr_handle->nb_efd + 1) > intr_handle->max_intr) - intr_handle->max_intr = intr_handle->nb_efd + 1; + plt_intr_handle_efds_index_set(intr_handle, vec, fd); + nb_efd = (vec > (uint32_t)plt_intr_handle_nb_efd_get(intr_handle)) ? 
+ vec : (uint32_t)plt_intr_handle_nb_efd_get(intr_handle); + plt_intr_handle_nb_efd_set(intr_handle, nb_efd); + + tmp_nb_efd = plt_intr_handle_nb_efd_get(intr_handle) + 1; + if (tmp_nb_efd > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) + plt_intr_handle_max_intr_set(intr_handle, tmp_nb_efd); plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec, - intr_handle->nb_efd, intr_handle->max_intr); + plt_intr_handle_nb_efd_get(intr_handle), + plt_intr_handle_max_intr_get(intr_handle)); /* Enable MSIX vectors to VFIO */ return irq_config(intr_handle, vec); @@ -174,24 +189,27 @@ void dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, void *data, unsigned int vec) { - struct plt_intr_handle tmp_handle; + struct plt_intr_handle *tmp_handle; uint8_t retries = 5; /* 5 ms */ - int rc; + int rc, fd; - if (vec > intr_handle->max_intr) { + if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) { plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec, - intr_handle->max_intr); + plt_intr_handle_max_intr_get(intr_handle)); return; } - tmp_handle = *intr_handle; - tmp_handle.fd = intr_handle->efds[vec]; - if (tmp_handle.fd == -1) + tmp_handle = intr_handle; + fd = plt_intr_handle_efds_index_get(intr_handle, vec); + if (fd == -1) + return; + + if (plt_intr_handle_fd_set(tmp_handle, fd)) return; do { /* Un-register callback func from platform lib */ - rc = plt_intr_callback_unregister(&tmp_handle, cb, data); + rc = plt_intr_callback_unregister(tmp_handle, cb, data); /* Retry only if -EAGAIN */ if (rc != -EAGAIN) break; @@ -205,12 +223,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb, } plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec, - intr_handle->nb_efd, intr_handle->max_intr); + plt_intr_handle_nb_efd_get(intr_handle), + plt_intr_handle_max_intr_get(intr_handle)); - if (intr_handle->efds[vec] != -1) - close(intr_handle->efds[vec]); + if 
(plt_intr_handle_efds_index_get(intr_handle, vec) != -1) + close(plt_intr_handle_efds_index_get(intr_handle, vec)); /* Disable MSIX vectors from VFIO */ - intr_handle->efds[vec] = -1; + plt_intr_handle_efds_index_set(intr_handle, vec, -1); + irq_config(intr_handle, vec); } diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c index 32be64a9d7..9c29f4272b 100644 --- a/drivers/common/cnxk/roc_nix_irq.c +++ b/drivers/common/cnxk/roc_nix_irq.c @@ -82,7 +82,7 @@ nix_lf_err_irq(void *param) static int nix_lf_register_err_irq(struct nix *nix) { - struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + struct plt_intr_handle *handle = nix->pci_dev->intr_handle; int rc, vec; vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT; @@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix) static void nix_lf_unregister_err_irq(struct nix *nix) { - struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + struct plt_intr_handle *handle = nix->pci_dev->intr_handle; int vec; vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT; @@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param) static int nix_lf_register_ras_irq(struct nix *nix) { - struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + struct plt_intr_handle *handle = nix->pci_dev->intr_handle; int rc, vec; vec = nix->msixoff + NIX_LF_INT_VEC_POISON; @@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix) static void nix_lf_unregister_ras_irq(struct nix *nix) { - struct plt_intr_handle *handle = &nix->pci_dev->intr_handle; + struct plt_intr_handle *handle = nix->pci_dev->intr_handle; int vec; vec = nix->msixoff + NIX_LF_INT_VEC_POISON; @@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix) struct nix *nix; nix = roc_nix_to_nix_priv(roc_nix); - handle = &nix->pci_dev->intr_handle; + handle = nix->pci_dev->intr_handle; /* Figure out max qintx required */ rqs = PLT_MIN(nix->qints, nix->nb_rx_queues); @@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix) int vec, 
q; nix = roc_nix_to_nix_priv(roc_nix); - handle = &nix->pci_dev->intr_handle; + handle = nix->pci_dev->intr_handle; for (q = 0; q < nix->configured_qints; q++) { vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q; @@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix) struct nix *nix; nix = roc_nix_to_nix_priv(roc_nix); - handle = &nix->pci_dev->intr_handle; + handle = nix->pci_dev->intr_handle; nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues); @@ -414,19 +414,21 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix) return rc; } - if (!handle->intr_vec) { - handle->intr_vec = plt_zmalloc( - nix->configured_cints * sizeof(int), 0); - if (!handle->intr_vec) { - plt_err("Failed to allocate %d rx intr_vec", - nix->configured_cints); - return -ENOMEM; + if (!plt_intr_handle_vec_list_base(handle)) { + rc = plt_intr_handle_vec_list_alloc(handle, "cnxk", + nix->configured_cints); + if (rc) { + plt_err("Failed to allocate intr vec list, rc=%d", + rc); + return rc; } } /* VFIO vector zero is reserved for misc interrupt so * doing required adjustment.
(b13bfab4cd) */ - handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec; + if (plt_intr_handle_vec_list_index_set(handle, q, + PLT_INTR_VEC_RXTX_OFFSET + vec)) + return -1; /* Configure CQE interrupt coalescing parameters */ plt_write64(((CQ_CQE_THRESH_DEFAULT) | @@ -450,7 +452,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix) int vec, q; nix = roc_nix_to_nix_priv(roc_nix); - handle = &nix->pci_dev->intr_handle; + handle = nix->pci_dev->intr_handle; for (q = 0; q < nix->configured_cints; q++) { vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q; @@ -465,6 +467,9 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix) dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q], vec); } + + if (plt_intr_handle_vec_list_base(handle)) + plt_intr_handle_vec_list_free(handle); plt_free(nix->cints_mem); } diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c index d064d125c1..69b6254870 100644 --- a/drivers/common/cnxk/roc_npa.c +++ b/drivers/common/cnxk/roc_npa.c @@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev) lf->pf_func = dev->pf_func; lf->npa_msixoff = npa_msixoff; - lf->intr_handle = &pci_dev->intr_handle; + lf->intr_handle = pci_dev->intr_handle; lf->pci_dev = pci_dev; idev->npa_pf_func = dev->pf_func; diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index 285b24b82d..872af26acc 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -101,6 +101,38 @@ #define plt_thread_is_intr rte_thread_is_intr #define plt_intr_callback_fn rte_intr_callback_fn +#define plt_intr_handle_efd_counter_size_get \ + rte_intr_handle_efd_counter_size_get +#define plt_intr_handle_efd_counter_size_set \ + rte_intr_handle_efd_counter_size_set +#define plt_intr_handle_vec_list_index_get rte_intr_handle_vec_list_index_get +#define plt_intr_handle_vec_list_index_set rte_intr_handle_vec_list_index_set +#define plt_intr_handle_vec_list_base rte_intr_handle_vec_list_base +#define plt_intr_handle_vec_list_alloc rte_intr_handle_vec_list_alloc +#define plt_intr_handle_vec_list_free rte_intr_handle_vec_list_free +#define plt_intr_handle_fd_set rte_intr_handle_fd_set +#define plt_intr_handle_fd_get rte_intr_handle_fd_get +#define plt_intr_handle_dev_fd_get rte_intr_handle_dev_fd_get +#define plt_intr_handle_dev_fd_set rte_intr_handle_dev_fd_set +#define plt_intr_handle_type_get rte_intr_handle_type_get +#define plt_intr_handle_type_set rte_intr_handle_type_set +#define plt_intr_handle_instance_alloc rte_intr_handle_instance_alloc +#define plt_intr_handle_instance_index_get rte_intr_handle_instance_index_get +#define plt_intr_handle_instance_index_set rte_intr_handle_instance_index_set +#define plt_intr_handle_instance_free rte_intr_handle_instance_free +#define plt_intr_handle_event_list_update rte_intr_handle_event_list_update +#define plt_intr_handle_max_intr_get rte_intr_handle_max_intr_get +#define plt_intr_handle_max_intr_set rte_intr_handle_max_intr_set +#define plt_intr_handle_nb_efd_get rte_intr_handle_nb_efd_get +#define plt_intr_handle_nb_efd_set rte_intr_handle_nb_efd_set +#define plt_intr_handle_nb_intr_get rte_intr_handle_nb_intr_get +#define plt_intr_handle_nb_intr_set rte_intr_handle_nb_intr_set +#define plt_intr_handle_efds_index_get rte_intr_handle_efds_index_get +#define plt_intr_handle_efds_index_set rte_intr_handle_efds_index_set +#define plt_intr_handle_efds_base rte_intr_handle_efds_base +#define plt_intr_handle_elist_index_get rte_intr_handle_elist_index_get +#define plt_intr_handle_elist_index_set rte_intr_handle_elist_index_set + #define plt_alarm_set rte_eal_alarm_set #define plt_alarm_cancel rte_eal_alarm_cancel diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index 1ccf2626bd..88165ad236 100644 ---
a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -491,7 +491,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp) goto sso_msix_fail; } - rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws, + rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws, nb_hwgrp); if (rc < 0) { plt_err("Failed to register SSO LF IRQs"); @@ -521,7 +521,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso) if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp) return; - sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, + sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle, roc_sso->nb_hws, roc_sso->nb_hwgrp); sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws); sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp); diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index 387164bb1d..534b697bee 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk) if (clk) *clk = rsp->tenns_clk; - rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id, + rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id, tim->tim_msix_offsets[ring_id]); if (rc < 0) { plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id); @@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id) struct tim_ring_req *req; int rc = -ENOSPC; - tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id, + tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id, tim->tim_msix_offsets[ring_id]); req = mbox_alloc_msg_tim_lf_free(dev->mbox); diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index 1485e2b357..906b283cde 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -640,7 +640,7 @@ otx2_af_pf_mbox_irq(void *param) static int 
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int i, rc; /* HW clear irq */ @@ -690,7 +690,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) static int mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int rc; /* Clear irq */ @@ -723,7 +723,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) static void mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int i; /* HW clear irq */ @@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) static void mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; /* Clear irq */ otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); @@ -838,7 +838,7 @@ otx2_pf_vf_flr_irq(void *param) static int vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int i; otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name); @@ -859,7 +859,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) static int vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int i, rc; otx2_base_dbg("Register VF FLR interrupts for %s", 
pci_dev->name); @@ -1036,7 +1036,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev) void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) { - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct otx2_dev *dev = otx2_dev; struct otx2_idev_cfg *idev; struct otx2_mbox *mbox; diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c index c0137ff36d..6efa4c6646 100644 --- a/drivers/common/octeontx2/otx2_irq.c +++ b/drivers/common/octeontx2/otx2_irq.c @@ -26,11 +26,12 @@ static int irq_get_info(struct rte_intr_handle *intr_handle) { struct vfio_irq_info irq = { .argsz = sizeof(irq) }; - int rc; + int rc, vfio_dev_fd; irq.index = VFIO_PCI_MSIX_IRQ_INDEX; - rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); if (rc < 0) { otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno); return rc; @@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle) if (irq.count > MAX_INTR_VEC_ID) { otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d", - intr_handle->max_intr, MAX_INTR_VEC_ID); - intr_handle->max_intr = MAX_INTR_VEC_ID; + rte_intr_handle_max_intr_get(intr_handle), + MAX_INTR_VEC_ID); + if (rte_intr_handle_max_intr_set(intr_handle, MAX_INTR_VEC_ID)) + return -1; } else { - intr_handle->max_intr = irq.count; + if (rte_intr_handle_max_intr_set(intr_handle, irq.count)) + return -1; } return 0; @@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec) { char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; struct vfio_irq_set *irq_set; + int len, rc, vfio_dev_fd; int32_t *fd_ptr; - int len, rc; - if (vec > intr_handle->max_intr) { + if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) { otx2_err("vector=%d greater than max_intr=%d", vec, - intr_handle->max_intr); + 
rte_intr_handle_max_intr_get(intr_handle)); return -EINVAL; } @@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec) /* Use vec fd to set interrupt vectors */ fd_ptr = (int32_t *)&irq_set->data[0]; - fd_ptr[0] = intr_handle->efds[vec]; + fd_ptr[0] = rte_intr_handle_efds_index_get(intr_handle, vec); - rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (rc) otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); @@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle) { char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; struct vfio_irq_set *irq_set; + int len, rc, vfio_dev_fd; int32_t *fd_ptr; - int len, rc; uint32_t i; - if (intr_handle->max_intr > MAX_INTR_VEC_ID) { + if (rte_intr_handle_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) { otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d", - intr_handle->max_intr, MAX_INTR_VEC_ID); + rte_intr_handle_max_intr_get(intr_handle), + MAX_INTR_VEC_ID); return -ERANGE; } len = sizeof(struct vfio_irq_set) + - sizeof(int32_t) * intr_handle->max_intr; + sizeof(int32_t) * rte_intr_handle_max_intr_get(intr_handle); irq_set = (struct vfio_irq_set *)irq_set_buf; irq_set->argsz = len; irq_set->start = 0; - irq_set->count = intr_handle->max_intr; + irq_set->count = rte_intr_handle_max_intr_get(intr_handle); irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER; irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; @@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle) for (i = 0; i < irq_set->count; i++) fd_ptr[i] = -1; - rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle); + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (rc) otx2_err("Failed to set irqs vector rc=%d", rc); @@ -131,7 +138,8 @@ int otx2_disable_irqs(struct rte_intr_handle *intr_handle) { /* Clear 
max_intr to indicate re-init next time */ - intr_handle->max_intr = 0; + if (rte_intr_handle_max_intr_set(intr_handle, 0)) + return -1; return rte_intr_disable(intr_handle); } @@ -143,42 +151,50 @@ int otx2_register_irq(struct rte_intr_handle *intr_handle, rte_intr_callback_fn cb, void *data, unsigned int vec) { - struct rte_intr_handle tmp_handle; - int rc; + struct rte_intr_handle *tmp_handle; + uint32_t nb_efd, tmp_nb_efd; + int rc, fd; /* If no max_intr read from VFIO */ - if (intr_handle->max_intr == 0) { + if (rte_intr_handle_max_intr_get(intr_handle) == 0) { irq_get_info(intr_handle); irq_init(intr_handle); } - if (vec > intr_handle->max_intr) { + if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) { otx2_err("Vector=%d greater than max_intr=%d", vec, - intr_handle->max_intr); + rte_intr_handle_max_intr_get(intr_handle)); return -EINVAL; } - tmp_handle = *intr_handle; + tmp_handle = intr_handle; /* Create new eventfd for interrupt vector */ - tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); - if (tmp_handle.fd == -1) + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (fd == -1) return -ENODEV; + if (rte_intr_handle_fd_set(tmp_handle, fd)) + return errno; + /* Register vector interrupt callback */ - rc = rte_intr_callback_register(&tmp_handle, cb, data); + rc = rte_intr_callback_register(tmp_handle, cb, data); if (rc) { otx2_err("Failed to register vector:0x%x irq callback.", vec); return rc; } - intr_handle->efds[vec] = tmp_handle.fd; - intr_handle->nb_efd = (vec > intr_handle->nb_efd) ? - vec : intr_handle->nb_efd; - if ((intr_handle->nb_efd + 1) > intr_handle->max_intr) - intr_handle->max_intr = intr_handle->nb_efd + 1; + rte_intr_handle_efds_index_set(intr_handle, vec, fd); + nb_efd = (vec > (uint32_t)rte_intr_handle_nb_efd_get(intr_handle)) ? 
+ vec : (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); + rte_intr_handle_nb_efd_set(intr_handle, nb_efd); + + tmp_nb_efd = rte_intr_handle_nb_efd_get(intr_handle) + 1; + if (tmp_nb_efd > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) + rte_intr_handle_max_intr_set(intr_handle, tmp_nb_efd); - otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", - vec, intr_handle->nb_efd, intr_handle->max_intr); + otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec, + rte_intr_handle_nb_efd_get(intr_handle), + rte_intr_handle_max_intr_get(intr_handle)); /* Enable MSIX vectors to VFIO */ return irq_config(intr_handle, vec); @@ -192,24 +208,27 @@ void otx2_unregister_irq(struct rte_intr_handle *intr_handle, rte_intr_callback_fn cb, void *data, unsigned int vec) { - struct rte_intr_handle tmp_handle; + struct rte_intr_handle *tmp_handle; uint8_t retries = 5; /* 5 ms */ - int rc; + int rc, fd; - if (vec > intr_handle->max_intr) { + if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) { otx2_err("Error unregistering MSI-X interrupts vec:%d > %d", - vec, intr_handle->max_intr); + vec, rte_intr_handle_max_intr_get(intr_handle)); return; } - tmp_handle = *intr_handle; - tmp_handle.fd = intr_handle->efds[vec]; - if (tmp_handle.fd == -1) + tmp_handle = intr_handle; + fd = rte_intr_handle_efds_index_get(intr_handle, vec); + if (fd == -1) + return; + + if (rte_intr_handle_fd_set(tmp_handle, fd)) return; do { - /* Un-register callback func from eal lib */ - rc = rte_intr_callback_unregister(&tmp_handle, cb, data); + /* Un-register callback func from platform lib */ + rc = rte_intr_callback_unregister(tmp_handle, cb, data); /* Retry only if -EAGAIN */ if (rc != -EAGAIN) break; @@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle, } while (retries); if (rc < 0) { - otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d", - vec, rc); + otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc); return; } - 
otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", - vec, intr_handle->nb_efd, intr_handle->max_intr); + otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec, + rte_intr_handle_nb_efd_get(intr_handle), + rte_intr_handle_max_intr_get(intr_handle)); - if (intr_handle->efds[vec] != -1) - close(intr_handle->efds[vec]); + if (rte_intr_handle_efds_index_get(intr_handle, vec) != -1) + close(rte_intr_handle_efds_index_get(intr_handle, vec)); /* Disable MSIX vectors from VFIO */ - intr_handle->efds[vec] = -1; + rte_intr_handle_efds_index_set(intr_handle, vec, -1); irq_config(intr_handle, vec); } diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c index bf90d095fe..d5d6b5bad7 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c @@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev, uint16_t msix_off, uintptr_t base) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; /* Disable error interrupts */ otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C); @@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev, uint16_t msix_off, uintptr_t base) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int ret; /* Disable error interrupts */ diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c index a2033646e6..9b7ad27b04 100644 --- a/drivers/event/octeontx2/otx2_evdev_irq.c +++ b/drivers/event/octeontx2/otx2_evdev_irq.c @@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff, uintptr_t base) { struct rte_pci_device *pci_dev = 
RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int rc, vec; vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; @@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff, uintptr_t base) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int rc, vec; vec = gws_msixoff + SSOW_LF_INT_VEC_IOP; @@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff, uintptr_t base) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int vec; vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP; @@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff, uintptr_t base) { struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int vec; vec = gws_msixoff + SSOW_LF_INT_VEC_IOP; @@ -198,7 +198,7 @@ static int tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, uintptr_t base) { - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int rc, vec; vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; @@ -226,7 +226,7 @@ static void tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, uintptr_t base) { - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; int vec; vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c index fb630fecf8..f63dc06ef2 100644 --- 
a/drivers/mempool/octeontx2/otx2_mempool.c +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) lf->pf_func = dev->pf_func; lf->npa_msixoff = npa_msixoff; - lf->intr_handle = &pci_dev->intr_handle; + lf->intr_handle = pci_dev->intr_handle; lf->pci_dev = pci_dev; idev->npa_pf_func = dev->pf_func; diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c index 0ce35eb519..03c37960eb 100644 --- a/drivers/net/atlantic/atl_ethdev.c +++ b/drivers/net/atlantic/atl_ethdev.c @@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev) { struct atl_adapter *adapter = eth_dev->data->dev_private; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); int err = 0; @@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev) { struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; int status; int err; @@ -525,10 +525,10 @@ atl_dev_start(struct rte_eth_dev *dev) } } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -608,7 +608,7 @@ atl_dev_stop(struct rte_eth_dev *dev) struct aq_hw_s *hw = 
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; PMD_INIT_FUNC_TRACE(); dev->data->dev_started = 0; @@ -638,10 +638,8 @@ atl_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); return 0; } @@ -692,7 +690,7 @@ static int atl_dev_close(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct aq_hw_s *hw; int ret; diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c index 623fa5e5ff..f32619e05c 100644 --- a/drivers/net/avp/avp_ethdev.c +++ b/drivers/net/avp/avp_ethdev.c @@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data) status); /* re-enable UIO interrupt handling */ - ret = rte_intr_ack(&pci_dev->intr_handle); + ret = rte_intr_ack(pci_dev->intr_handle); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n", ret); @@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev) return -EINVAL; /* enable UIO interrupt handling */ - ret = rte_intr_enable(&pci_dev->intr_handle); + ret = rte_intr_enable(pci_dev->intr_handle); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n", ret); @@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev) RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET)); /* enable UIO interrupt handling */ - ret = rte_intr_disable(&pci_dev->intr_handle); + ret = rte_intr_disable(pci_dev->intr_handle); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to 
disable UIO interrupts, ret=%d\n", ret); @@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev) int ret; /* register a callback handler with UIO for interrupt notifications */ - ret = rte_intr_callback_register(&pci_dev->intr_handle, + ret = rte_intr_callback_register(pci_dev->intr_handle, avp_dev_interrupt_handler, (void *)eth_dev); if (ret < 0) { diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index 9cb4818af1..c26e0a199e 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param) } } /* Unmask interrupts since disabled after generation */ - rte_intr_ack(&pdata->pci_dev->intr_handle); + rte_intr_ack(pdata->pci_dev->intr_handle); } /* @@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev) } /* enable uio/vfio intr/eventfd mapping */ - rte_intr_enable(&pdata->pci_dev->intr_handle); + rte_intr_enable(pdata->pci_dev->intr_handle); /* phy start*/ pdata->phy_if.phy_start(pdata); @@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); - rte_intr_disable(&pdata->pci_dev->intr_handle); + rte_intr_disable(pdata->pci_dev->intr_handle); if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state)) return 0; @@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev) return ret; } - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, axgbe_dev_interrupt_handler, (void *)eth_dev); PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x", @@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev) axgbe_dev_clear_queues(eth_dev); /* disable uio intr before callback unregister */ - rte_intr_disable(&pci_dev->intr_handle); - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_disable(pci_dev->intr_handle); + rte_intr_callback_unregister(pci_dev->intr_handle, axgbe_dev_interrupt_handler, (void *)eth_dev); diff --git 
a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c index 4f98e695ae..35ffda84f1 100644 --- a/drivers/net/axgbe/axgbe_mdio.c +++ b/drivers/net/axgbe/axgbe_mdio.c @@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata) } /* Disable auto-negotiation interrupt */ - rte_intr_disable(&pdata->pci_dev->intr_handle); + rte_intr_disable(pdata->pci_dev->intr_handle); /* Start auto-negotiation in a supported mode */ if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) { @@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata) } else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) { axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100); } else { - rte_intr_enable(&pdata->pci_dev->intr_handle); + rte_intr_enable(pdata->pci_dev->intr_handle); return -EINVAL; } @@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata) pdata->kx_state = AXGBE_RX_BPA; /* Re-enable auto-negotiation interrupt */ - rte_intr_enable(&pdata->pci_dev->intr_handle); + rte_intr_enable(pdata->pci_dev->intr_handle); axgbe_an37_enable_interrupts(pdata); axgbe_an_init(pdata); diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c index 463886f17a..a34b2f078b 100644 --- a/drivers/net/bnx2x/bnx2x_ethdev.c +++ b/drivers/net/bnx2x/bnx2x_ethdev.c @@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param) PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled"); bnx2x_interrupt_action(dev, 1); - rte_intr_ack(&sc->pci_dev->intr_handle); + rte_intr_ack(sc->pci_dev->intr_handle); } static void bnx2x_periodic_start(void *param) @@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev) } if (IS_PF(sc)) { - rte_intr_callback_register(&sc->pci_dev->intr_handle, + rte_intr_callback_register(sc->pci_dev->intr_handle, bnx2x_interrupt_handler, (void *)dev); - if (rte_intr_enable(&sc->pci_dev->intr_handle)) + if (rte_intr_enable(sc->pci_dev->intr_handle)) PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed"); } @@ -262,8 +262,8 @@ 
bnx2x_dev_stop(struct rte_eth_dev *dev) bnx2x_dev_rxtx_init_dummy(dev); if (IS_PF(sc)) { - rte_intr_disable(&sc->pci_dev->intr_handle); - rte_intr_callback_unregister(&sc->pci_dev->intr_handle, + rte_intr_disable(sc->pci_dev->intr_handle); + rte_intr_callback_unregister(sc->pci_dev->intr_handle, bnx2x_interrupt_handler, (void *)dev); /* stop the periodic callout */ diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index de34a2f0bb..02598d8030 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp) static int bnxt_start_nic(struct bnxt *bp) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; uint32_t queue_id, base = BNXT_MISC_VEC_ID; uint32_t vec = BNXT_MISC_VEC_ID; @@ -831,12 +831,10 @@ static int bnxt_start_nic(struct bnxt *bp) return rc; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - bp->eth_dev->data->nb_rx_queues * - sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + bp->eth_dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", bp->eth_dev->data->nb_rx_queues); rc = -ENOMEM; @@ -844,13 +842,15 @@ static int bnxt_start_nic(struct bnxt *bp) } PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p " "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n", - intr_handle->intr_vec, intr_handle->nb_efd, - intr_handle->max_intr); + rte_intr_handle_vec_list_base(intr_handle), + rte_intr_handle_nb_efd_get(intr_handle), + rte_intr_handle_max_intr_get(intr_handle)); for (queue_id = 0; queue_id < 
bp->eth_dev->data->nb_rx_queues; queue_id++) { - intr_handle->intr_vec[queue_id] = - vec + BNXT_RX_VEC_START; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, + queue_id, vec + BNXT_RX_VEC_START); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } } @@ -1459,7 +1459,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct rte_eth_link link; int ret; @@ -1501,10 +1501,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev) /* Clean queue intr-vector mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); bnxt_hwrm_port_clr_stats(bp); bnxt_free_tx_mbufs(bp); diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c index 122a1f9908..508abfc844 100644 --- a/drivers/net/bnxt/bnxt_irq.c +++ b/drivers/net/bnxt/bnxt_irq.c @@ -67,7 +67,7 @@ void bnxt_int_handler(void *param) int bnxt_free_int(struct bnxt *bp) { - struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle; + struct rte_intr_handle *intr_handle = bp->pdev->intr_handle; struct bnxt_irq *irq = bp->irq_tbl; int rc = 0; @@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp) int bnxt_request_int(struct bnxt *bp) { - struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle; + struct rte_intr_handle *intr_handle = bp->pdev->intr_handle; struct bnxt_irq *irq = bp->irq_tbl; int rc = 0; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 27d670f843..1f4336b4a7 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -219,7 +219,7 @@ 
dpaa_eth_dev_configure(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); - intr_handle = &dpaa_dev->intr_handle; + intr_handle = dpaa_dev->intr_handle; __fif = container_of(fif, struct __fman_if, __if); /* Rx offloads which are enabled by default */ @@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) } /* if the interrupts were configured on this devices*/ - if (intr_handle && intr_handle->fd) { + if (intr_handle && rte_intr_handle_fd_get(intr_handle)) { if (dev->data->dev_conf.intr_conf.lsc != 0) rte_intr_callback_register(intr_handle, dpaa_interrupt_handler, (void *)dev); - ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd); + ret = dpaa_intr_enable(__fif->node_name, + rte_intr_handle_fd_get(intr_handle)); if (ret) { if (dev->data->dev_conf.intr_conf.lsc != 0) { rte_intr_callback_unregister(intr_handle, @@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param) int bytes_read; dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); - intr_handle = &dpaa_dev->intr_handle; + intr_handle = dpaa_dev->intr_handle; - bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t)); + bytes_read = read(rte_intr_handle_fd_get(intr_handle), &buf, + sizeof(uint64_t)); if (bytes_read < 0) DPAA_PMD_ERR("Error reading eventfd\n"); dpaa_eth_link_update(dev, 0); @@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev) } dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); - intr_handle = &dpaa_dev->intr_handle; + intr_handle = dpaa_dev->intr_handle; __fif = container_of(fif, struct __fman_if, __if); ret = dpaa_eth_dev_stop(dev); @@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev) if (link->link_status && !link->link_autoneg) dpaa_restart_link_autoneg(__fif->node_name); - if (intr_handle && intr_handle->fd && + if (intr_handle && rte_intr_handle_fd_get(intr_handle) && dev->data->dev_conf.intr_conf.lsc != 0) { 
dpaa_intr_disable(__fif->node_name); rte_intr_callback_unregister(intr_handle, @@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); - dev->intr_handle = &dpaa_dev->intr_handle; - dev->intr_handle->intr_vec = rte_zmalloc(NULL, - dpaa_push_mode_max_queue, 0); - if (!dev->intr_handle->intr_vec) { + dev->intr_handle = dpaa_dev->intr_handle; + if (rte_intr_handle_vec_list_alloc(dev->intr_handle, + NULL, dpaa_push_mode_max_queue)) { DPAA_PMD_ERR("intr_vec alloc failed"); return -ENOMEM; } - dev->intr_handle->nb_efd = dpaa_push_mode_max_queue; - dev->intr_handle->max_intr = dpaa_push_mode_max_queue; + if (rte_intr_handle_nb_efd_set(dev->intr_handle, + dpaa_push_mode_max_queue)) + return -rte_errno; + + if (rte_intr_handle_max_intr_set(dev->intr_handle, + dpaa_push_mode_max_queue)) + return -rte_errno; } - dev->intr_handle->type = RTE_INTR_HANDLE_EXT; - dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1; - dev->intr_handle->efds[queue_idx] = q_fd; + if (rte_intr_handle_type_set(dev->intr_handle, + RTE_INTR_HANDLE_EXT)) + return -rte_errno; + + if (rte_intr_handle_vec_list_index_set(dev->intr_handle, + queue_idx, queue_idx + 1)) + return -rte_errno; + + if (rte_intr_handle_efds_index_set(dev->intr_handle, queue_idx, + q_fd)) + return -rte_errno; + rxq->q_fd = q_fd; } rxq->bp_array = rte_dpaa_bpid_info; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index c12169578e..f95d3bbf53 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1157,7 +1157,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev) struct rte_intr_handle *intr_handle; dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device); - intr_handle = &dpaa2_dev->intr_handle; + intr_handle = dpaa2_dev->intr_handle; PMD_INIT_FUNC_TRACE(); @@ -1228,8 +1228,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev) } /* if the interrupts were configured on this 
devices*/ - if (intr_handle && (intr_handle->fd) && - (dev->data->dev_conf.intr_conf.lsc != 0)) { + if (intr_handle && rte_intr_handle_fd_get(intr_handle) && + dev->data->dev_conf.intr_conf.lsc != 0) { /* Registering LSC interrupt handler */ rte_intr_callback_register(intr_handle, dpaa2_interrupt_handler, @@ -1268,8 +1268,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); /* reset interrupt callback */ - if (intr_handle && (intr_handle->fd) && - (dev->data->dev_conf.intr_conf.lsc != 0)) { + if (intr_handle && rte_intr_handle_fd_get(intr_handle) && + dev->data->dev_conf.intr_conf.lsc != 0) { /*disable dpni irqs */ dpaa2_eth_setup_irqs(dev, 0); diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index a0ca371b02..fe20fc5e6c 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -237,7 +237,7 @@ static int eth_em_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct e1000_adapter *adapter = E1000_DEV_PRIVATE(eth_dev->data->dev_private); struct e1000_hw *hw = @@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int ret, mask; uint32_t intr_vector = 0; uint32_t *speeds; @@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev) } if (rte_intr_dp_is_en(intr_handle)) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" - " intr_vec", 
dev->data->nb_rx_queues); + " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; } @@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev) struct rte_eth_link link; struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; dev->data->dev_started = 0; @@ -752,10 +750,8 @@ eth_em_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); return 0; } @@ -767,7 +763,7 @@ eth_em_close(struct rte_eth_dev *dev) struct e1000_adapter *adapter = E1000_DEV_PRIVATE(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int ret; if (rte_eal_process_type() != RTE_PROC_PRIMARY) @@ -1008,7 +1004,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue { struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; em_rxq_intr_enable(hw); rte_intr_ack(intr_handle); diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index 10ee0f3341..66a6380496 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle 
*intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (rte_intr_allow_others(intr_handle) && dev->data->dev_conf.intr_conf.lsc != 0) { @@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (rte_intr_allow_others(intr_handle) && dev->data->dev_conf.intr_conf.lsc != 0) { @@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev) eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id); - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, eth_igb_interrupt_handler, (void *)eth_dev); /* enable uio/vfio intr/eventfd mapping */ - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); /* enable support intr */ igb_intr_enable(eth_dev); @@ -1001,7 +1001,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev) eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, "igb_mac_82576_vf"); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; rte_intr_callback_register(intr_handle, eth_igbvf_interrupt_handler, eth_dev); @@ -1205,7 +1205,7 @@ eth_igb_start(struct rte_eth_dev *dev) struct e1000_adapter *adapter = E1000_DEV_PRIVATE(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int ret, mask; uint32_t intr_vector = 0; uint32_t ctrl_ext; @@ -1264,11 +1264,11 @@ eth_igb_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if 
(intr_handle->intr_vec == NULL) { + /* Allocate the vector list */ + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -1427,7 +1427,7 @@ eth_igb_stop(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_eth_link link; - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct e1000_adapter *adapter = E1000_DEV_PRIVATE(dev->data->dev_private); @@ -1471,10 +1471,8 @@ eth_igb_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); adapter->stopped = true; dev->data->dev_started = 0; @@ -1514,7 +1512,7 @@ eth_igb_close(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_eth_link link; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct e1000_filter_info *filter_info = E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private); int ret; @@ -1540,10 +1538,9 @@ eth_igb_close(struct rte_eth_dev *dev) igb_dev_free_queues(dev); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + /* Cleanup vector list */ + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); memset(&link, 0, sizeof(link)); rte_eth_linkstatus_set(dev, 
&link); @@ -2784,7 +2781,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0; struct rte_eth_dev_info dev_info; @@ -3301,7 +3298,7 @@ igbvf_dev_start(struct rte_eth_dev *dev) struct e1000_adapter *adapter = E1000_DEV_PRIVATE(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int ret; uint32_t intr_vector = 0; @@ -3332,11 +3329,11 @@ igbvf_dev_start(struct rte_eth_dev *dev) return ret; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (!intr_handle->intr_vec) { + /* Allocate the vector list */ + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -3358,7 +3355,7 @@ static int igbvf_dev_stop(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct e1000_adapter *adapter = E1000_DEV_PRIVATE(dev->data->dev_private); @@ -3382,10 +3379,10 @@ igbvf_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + /* Clean vector 
list */ + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); adapter->stopped = true; dev->data->dev_started = 0; @@ -3423,7 +3420,7 @@ igbvf_dev_close(struct rte_eth_dev *dev) memset(&addr, 0, sizeof(addr)); igbvf_default_mac_addr_set(dev, &addr); - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, eth_igbvf_interrupt_handler, (void *)dev); @@ -5145,7 +5142,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec = E1000_MISC_VEC_ID; if (rte_intr_allow_others(intr_handle)) @@ -5165,7 +5162,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec = E1000_MISC_VEC_ID; if (rte_intr_allow_others(intr_handle)) @@ -5243,7 +5240,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev) uint32_t base = E1000_MISC_VEC_ID; uint32_t misc_shift = 0; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; /* won't configure msix register if no mapping is done * between intr vector and event fd @@ -5284,8 +5281,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev) E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE | E1000_GPIE_PBA | E1000_GPIE_EIAME | E1000_GPIE_NSICR); - intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << - misc_shift; + intr_mask = + 
RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), + uint32_t) << misc_shift; if (dev->data->dev_conf.intr_conf.lsc != 0) intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC); @@ -5303,8 +5301,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev) /* use EIAM to auto-mask when MSI-X interrupt * is asserted, this saves a register write for every interrupt */ - intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << - misc_shift; + intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), + uint32_t) << misc_shift; if (dev->data->dev_conf.intr_conf.lsc != 0) intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC); @@ -5314,8 +5312,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev) for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) { eth_igb_assign_msix_vector(hw, 0, queue_id, vec); - intr_handle->intr_vec[queue_id] = vec; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, queue_id, vec); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1) vec++; } diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 4cebf60a68..f73d7bb5bc 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter) static int ena_close(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ena_adapter *adapter = dev->data->dev_private; int ret = 0; @@ -947,7 +947,7 @@ static int ena_stop(struct rte_eth_dev *dev) struct ena_adapter *adapter = dev->data->dev_private; struct ena_com_dev *ena_dev = &adapter->ena_dev; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int rc; /* Cannot free memory in secondary process */ 
@@ -969,10 +969,10 @@ static int ena_stop(struct rte_eth_dev *dev) rte_intr_disable(intr_handle); rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + /* Cleanup vector list */ + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); rte_intr_enable(intr_handle); @@ -988,7 +988,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring) struct ena_adapter *adapter = ring->adapter; struct ena_com_dev *ena_dev = &adapter->ena_dev; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ena_com_create_io_ctx ctx = /* policy set to _HOST just to satisfy icc compiler */ { ENA_ADMIN_PLACEMENT_POLICY_HOST, @@ -1008,7 +1008,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring) ena_qid = ENA_IO_RXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; if (rte_intr_dp_is_en(intr_handle)) - ctx.msix_vector = intr_handle->intr_vec[ring->id]; + ctx.msix_vector = + rte_intr_handle_vec_list_index_get(intr_handle, + ring->id); + for (i = 0; i < ring->ring_size; i++) ring->empty_rx_reqs[i] = i; } @@ -1665,7 +1668,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) pci_dev->addr.devid, pci_dev->addr.function); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr; adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; @@ -2817,7 +2820,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter, static int ena_setup_rx_intr(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int rc; uint16_t vectors_nb, 
i; bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq; @@ -2844,9 +2847,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev) goto enable_intr; } - intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0); - if (intr_handle->intr_vec == NULL) { + /* Allocate the vector list */ + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate interrupt vector for %d queues\n", dev->data->nb_rx_queues); @@ -2865,7 +2868,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev) } for (i = 0; i < vectors_nb; ++i) - intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + i)) + goto disable_intr_efd; rte_intr_enable(intr_handle); return 0; @@ -2873,8 +2878,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev) disable_intr_efd: rte_intr_efd_disable(intr_handle); free_intr_vec: - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; + rte_intr_handle_vec_list_free(intr_handle); enable_intr: rte_intr_enable(intr_handle); return rc; diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 2affd380c6..0045dbd3f5 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -448,7 +448,7 @@ enic_intr_handler(void *arg) rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); enic_log_q_error(enic); /* Re-enable irq in case of INTx */ - rte_intr_ack(&enic->pdev->intr_handle); + rte_intr_ack(enic->pdev->intr_handle); } static int enic_rxq_intr_init(struct enic *enic) @@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic) " interrupts\n"); return err; } - intr_handle->intr_vec = rte_zmalloc("enic_intr_vec", - rxq_intr_count * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + + if (rte_intr_handle_vec_list_alloc(intr_handle, "enic_intr_vec", + rxq_intr_count)) { dev_err(enic, 
"Failed to allocate intr_vec\n"); return -ENOMEM; } for (i = 0; i < rxq_intr_count; i++) - intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + i + ENICPMD_RXQ_INTR_OFFSET)) + return -rte_errno; return 0; } @@ -494,10 +496,9 @@ static void enic_rxq_intr_deinit(struct enic *enic) intr_handle = enic->rte_dev->intr_handle; rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); } static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx) @@ -667,10 +668,10 @@ int enic_enable(struct enic *enic) vnic_dev_enable_wait(enic->vdev); /* Register and enable error interrupt */ - rte_intr_callback_register(&(enic->pdev->intr_handle), + rte_intr_callback_register(enic->pdev->intr_handle, enic_intr_handler, (void *)enic->rte_dev); - rte_intr_enable(&(enic->pdev->intr_handle)); + rte_intr_enable(enic->pdev->intr_handle); /* Unmask LSC interrupt */ vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]); @@ -1112,8 +1113,8 @@ int enic_disable(struct enic *enic) (void)vnic_intr_masked(&enic->intr[i]); /* flush write */ } enic_rxq_intr_deinit(enic); - rte_intr_disable(&enic->pdev->intr_handle); - rte_intr_callback_unregister(&enic->pdev->intr_handle, + rte_intr_disable(enic->pdev->intr_handle); + rte_intr_callback_unregister(enic->pdev->intr_handle, enic_intr_handler, (void *)enic->rte_dev); diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c index 8216063a3d..b5c53e4286 100644 --- a/drivers/net/failsafe/failsafe.c +++ b/drivers/net/failsafe/failsafe.c @@ -266,11 +266,25 @@ fs_eth_dev_create(struct rte_vdev_device *vdev) mac->addr_bytes[4], mac->addr_bytes[5]); dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC | RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - PRIV(dev)->intr_handle = (struct rte_intr_handle){ - 
.fd = -1, - .type = RTE_INTR_HANDLE_EXT, - }; + + /* Allocate interrupt instance */ + PRIV(dev)->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!PRIV(dev)->intr_handle) { + ERROR("Failed to allocate intr handle"); + goto cancel_alarm; + } + + if (rte_intr_handle_fd_set(PRIV(dev)->intr_handle, -1)) + goto cancel_alarm; + + if (rte_intr_handle_type_set(PRIV(dev)->intr_handle, + RTE_INTR_HANDLE_EXT)) + goto cancel_alarm; + rte_eth_dev_probing_finish(dev); + return 0; cancel_alarm: failsafe_hotplug_alarm_cancel(dev); @@ -299,6 +313,8 @@ fs_rte_eth_free(const char *name) return 0; /* port already released */ ret = failsafe_eth_dev_close(dev); rte_eth_dev_release_port(dev); + if (PRIV(dev)->intr_handle) + rte_intr_handle_instance_free(PRIV(dev)->intr_handle); return ret; } diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c index 602c04033c..57df67c6c5 100644 --- a/drivers/net/failsafe/failsafe_intr.c +++ b/drivers/net/failsafe/failsafe_intr.c @@ -410,12 +410,11 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv) { struct rte_intr_handle *intr_handle; - intr_handle = &priv->intr_handle; - if (intr_handle->intr_vec != NULL) { - free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } - intr_handle->nb_efd = 0; + intr_handle = priv->intr_handle; + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); + + rte_intr_handle_nb_efd_set(intr_handle, 0); } /** @@ -439,11 +438,10 @@ fs_rx_intr_vec_install(struct fs_priv *priv) rxqs_n = priv->data->nb_rx_queues; n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID); count = 0; - intr_handle = &priv->intr_handle; - RTE_ASSERT(intr_handle->intr_vec == NULL); + intr_handle = priv->intr_handle; + RTE_ASSERT(rte_intr_handle_vec_list_base(intr_handle) == NULL); /* Allocate the interrupt vector of the failsafe Rx proxy interrupts */ - intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0])); - if 
(intr_handle->intr_vec == NULL) { + if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) { fs_rx_intr_vec_uninstall(priv); rte_errno = ENOMEM; ERROR("Failed to allocate memory for interrupt vector," @@ -456,9 +454,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv) /* Skip queues that cannot request interrupts. */ if (rxq == NULL || rxq->event_fd < 0) { /* Use invalid intr_vec[] index to disable entry. */ - intr_handle->intr_vec[i] = - RTE_INTR_VEC_RXTX_OFFSET + - RTE_MAX_RXTX_INTR_VEC_ID; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)) + return -rte_errno; continue; } if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { @@ -469,15 +467,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv) fs_rx_intr_vec_uninstall(priv); return -rte_errno; } - intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count; - intr_handle->efds[count] = rxq->event_fd; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + count)) + return -rte_errno; + + if (rte_intr_handle_efds_index_set(intr_handle, count, + rxq->event_fd)) + return -rte_errno; count++; } if (count == 0) { fs_rx_intr_vec_uninstall(priv); } else { - intr_handle->nb_efd = count; - intr_handle->efd_counter_size = sizeof(uint64_t); + if (rte_intr_handle_nb_efd_set(intr_handle, count)) + return -rte_errno; + + if (rte_intr_handle_efd_counter_size_set(intr_handle, + sizeof(uint64_t))) + return -rte_errno; } return 0; } @@ -499,7 +506,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev) struct rte_intr_handle *intr_handle; priv = PRIV(dev); - intr_handle = &priv->intr_handle; + intr_handle = priv->intr_handle; rte_intr_free_epoll_fd(intr_handle); fs_rx_event_proxy_uninstall(priv); fs_rx_intr_vec_uninstall(priv); @@ -530,6 +537,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev) fs_rx_intr_vec_uninstall(priv); return -rte_errno; } - dev->intr_handle = &priv->intr_handle; + dev->intr_handle = priv->intr_handle; return 0; } diff --git 
a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c index 5ff33e03e0..a3f5f34dd3 100644 --- a/drivers/net/failsafe/failsafe_ops.c +++ b/drivers/net/failsafe/failsafe_ops.c @@ -398,15 +398,24 @@ fs_rx_queue_setup(struct rte_eth_dev *dev, * For the time being, fake as if we are using MSIX interrupts, * this will cause rte_intr_efd_enable to allocate an eventfd for us. */ - struct rte_intr_handle intr_handle = { - .type = RTE_INTR_HANDLE_VFIO_MSIX, - .efds = { -1, }, - }; + struct rte_intr_handle *intr_handle; struct sub_device *sdev; struct rxq *rxq; uint8_t i; int ret; + intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + false); + if (!intr_handle) + return -ENOMEM; + + if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX)) + return -rte_errno; + + if (rte_intr_handle_efds_index_set(intr_handle, 0, -1)) + return -rte_errno; + fs_lock(dev, 0); if (rx_conf->rx_deferred_start) { FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) { @@ -440,12 +449,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev, rxq->info.nb_desc = nb_rx_desc; rxq->priv = PRIV(dev); rxq->sdev = PRIV(dev)->subs; - ret = rte_intr_efd_enable(&intr_handle, 1); + ret = rte_intr_efd_enable(intr_handle, 1); if (ret < 0) { fs_unlock(dev, 0); return ret; } - rxq->event_fd = intr_handle.efds[0]; + rxq->event_fd = rte_intr_handle_efds_index_get(intr_handle, 0); dev->data->rx_queues[rx_queue_id] = rxq; FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) { ret = rte_eth_rx_queue_setup(PORT_ID(sdev), @@ -458,10 +467,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev, } } fs_unlock(dev, 0); + rte_intr_handle_instance_free(intr_handle); return 0; free_rxq: fs_rx_queue_release(rxq); fs_unlock(dev, 0); + rte_intr_handle_instance_free(intr_handle); return ret; } diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h index cd39d103c6..a80f5e2caf 100644 --- a/drivers/net/failsafe/failsafe_private.h +++ 
b/drivers/net/failsafe/failsafe_private.h @@ -166,7 +166,7 @@ struct fs_priv { struct rte_ether_addr *mcast_addrs; /* current capabilities */ struct rte_eth_dev_owner my_owner; /* Unique owner. */ - struct rte_intr_handle intr_handle; /* Port interrupt handle. */ + struct rte_intr_handle *intr_handle; /* Port interrupt handle. */ /* * Fail-safe state machine. * This level will be tracking state of the EAL and eth diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 3236290e40..6f58c2543f 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -32,7 +32,8 @@ #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) /* default 1:1 map from queue ID to interrupt vector ID */ -#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id]) +#define Q2V(pci_dev, queue_id) \ + (rte_intr_handle_vec_list_index_get((pci_dev)->intr_handle, queue_id)) /* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */ #define MAX_LPORT_NUM 128 @@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct fm10k_macvlan_filter_info *macvlan; struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pdev->intr_handle; + struct rte_intr_handle *intr_handle = pdev->intr_handle; int i, ret; struct fm10k_rx_queue *rxq; uint64_t base_addr; @@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pdev->intr_handle; + struct rte_intr_handle *intr_handle = pdev->intr_handle; int i; PMD_INIT_FUNC_TRACE(); @@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev) } /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; + 
rte_intr_handle_vec_list_free(intr_handle); return 0; } @@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) else FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)), FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR); - rte_intr_ack(&pdev->intr_handle); + rte_intr_ack(pdev->intr_handle); return 0; } @@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pdev->intr_handle; + struct rte_intr_handle *intr_handle = pdev->intr_handle; uint32_t intr_vector, vec; uint16_t queue_id; int result = 0; @@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev) } if (rte_intr_dp_is_en(intr_handle) && !result) { - intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { for (queue_id = 0, vec = FM10K_RX_VEC_START; queue_id < dev->data->nb_rx_queues; queue_id++) { - intr_handle->intr_vec[queue_id] = vec; - if (vec < intr_handle->nb_efd - 1 - + FM10K_RX_VEC_START) + rte_intr_handle_vec_list_index_set(intr_handle, + queue_id, vec); + int nb_efd = + rte_intr_handle_nb_efd_get(intr_handle); + if (vec < (uint32_t)nb_efd - 1 + + FM10K_RX_VEC_START) vec++; } } else { @@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pdev->intr_handle; + struct rte_intr_handle *intr_handle = pdev->intr_handle; int ret; PMD_INIT_FUNC_TRACE(); @@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pdev = 
RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pdev->intr_handle; + struct rte_intr_handle *intr_handle = pdev->intr_handle; int diag, i; struct fm10k_macvlan_filter_info *macvlan; diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index 1a72401546..89c576a902 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -1225,13 +1225,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev) hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE); /* disable rte interrupt */ - ret = rte_intr_disable(&pci_dev->intr_handle); + ret = rte_intr_disable(pci_dev->intr_handle); if (ret) PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret); do { ret = - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, hinic_dev_interrupt_handler, dev); if (ret >= 0) { break; @@ -3134,7 +3134,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev) } /* register callback func to eal lib */ - rc = rte_intr_callback_register(&pci_dev->intr_handle, + rc = rte_intr_callback_register(pci_dev->intr_handle, hinic_dev_interrupt_handler, (void *)eth_dev); if (rc) { @@ -3144,7 +3144,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev) } /* enable uio/vfio intr/eventfd mapping */ - rc = rte_intr_enable(&pci_dev->intr_handle); + rc = rte_intr_enable(pci_dev->intr_handle); if (rc) { PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s", eth_dev->data->name); @@ -3174,7 +3174,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev) return 0; enable_intr_fail: - (void)rte_intr_callback_unregister(&pci_dev->intr_handle, + (void)rte_intr_callback_unregister(pci_dev->intr_handle, hinic_dev_interrupt_handler, (void *)eth_dev); diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 7d37004972..1b46e81b5b 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5275,7 +5275,7 @@ 
hns3_init_pf(struct rte_eth_dev *eth_dev) hns3_config_all_msix_error(hw, true); - ret = rte_intr_callback_register(&pci_dev->intr_handle, + ret = rte_intr_callback_register(pci_dev->intr_handle, hns3_interrupt_handler, eth_dev); if (ret) { @@ -5288,7 +5288,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev) goto err_get_config; /* Enable interrupt */ - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); hns3_pf_enable_irq0(hw); /* Get configuration */ @@ -5347,8 +5347,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev) hns3_tqp_stats_uninit(hw); err_get_config: hns3_pf_disable_irq0(hw); - rte_intr_disable(&pci_dev->intr_handle); - hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler, + rte_intr_disable(pci_dev->intr_handle); + hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler, eth_dev); err_intr_callback_register: err_cmd_init: @@ -5381,8 +5381,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev) hns3_tqp_stats_uninit(hw); hns3_config_mac_tnl_int(hw, false); hns3_pf_disable_irq0(hw); - rte_intr_disable(&pci_dev->intr_handle); - hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler, + rte_intr_disable(pci_dev->intr_handle); + hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler, eth_dev); hns3_config_all_msix_error(hw, false); hns3_cmd_uninit(hw); @@ -5716,7 +5716,7 @@ static int hns3_map_rx_interrupt(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t base = RTE_INTR_VEC_ZERO_OFFSET; uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET; @@ -5739,11 +5739,10 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev) if (rte_intr_efd_enable(intr_handle, intr_vector)) return -EINVAL; - if (intr_handle->intr_vec == NULL) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - 
hw->used_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + /* Allocate vector list */ + if (!rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + hw->used_rx_queues)) { hns3_err(hw, "failed to allocate %u rx_queues intr_vec", hw->used_rx_queues); ret = -ENOMEM; @@ -5761,20 +5760,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev) HNS3_RING_TYPE_RX, q_id); if (ret) goto bind_vector_error; - intr_handle->intr_vec[q_id] = vec; + + if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec)) + goto bind_vector_error; /* * If there are not enough efds (e.g. not enough interrupt), * remaining queues will be bond to the last interrupt. */ - if (vec < base + intr_handle->nb_efd - 1) + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1) vec++; } rte_intr_enable(intr_handle); return 0; bind_vector_error: - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; + rte_intr_handle_vec_list_free(intr_handle); alloc_intr_vec_error: rte_intr_efd_disable(intr_handle); return ret; @@ -5785,7 +5785,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw) { struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint16_t q_id; int ret; @@ -5795,8 +5795,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw) if (rte_intr_dp_is_en(intr_handle)) { for (q_id = 0; q_id < hw->used_rx_queues; q_id++) { ret = hns3_bind_ring_with_vector(hw, - intr_handle->intr_vec[q_id], true, - HNS3_RING_TYPE_RX, q_id); + rte_intr_handle_vec_list_index_get(intr_handle, + q_id), + true, HNS3_RING_TYPE_RX, q_id); if (ret) return ret; } @@ -5939,7 +5940,7 @@ static void hns3_unmap_rx_interrupt(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = 
&pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct hns3_adapter *hns = dev->data->dev_private; struct hns3_hw *hw = &hns->hw; uint8_t base = RTE_INTR_VEC_ZERO_OFFSET; @@ -5959,16 +5960,15 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev) (void)hns3_bind_ring_with_vector(hw, vec, false, HNS3_RING_TYPE_RX, q_id); - if (vec < base + intr_handle->nb_efd - 1) + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } } /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); } static int diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 8d9b7979c8..2ee2a837dd 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev) hns3vf_clear_event_cause(hw, 0); - ret = rte_intr_callback_register(&pci_dev->intr_handle, + ret = rte_intr_callback_register(pci_dev->intr_handle, hns3vf_interrupt_handler, eth_dev); if (ret) { PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret); @@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev) } /* Enable interrupt */ - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); hns3vf_enable_irq0(hw); /* Get configuration from PF */ @@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev) err_get_config: hns3vf_disable_irq0(hw); - rte_intr_disable(&pci_dev->intr_handle); - hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler, + rte_intr_disable(pci_dev->intr_handle); + hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler, eth_dev); err_intr_callback_register: err_cmd_init: @@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev) hns3_flow_uninit(eth_dev); 
hns3_tqp_stats_uninit(hw); hns3vf_disable_irq0(hw); - rte_intr_disable(&pci_dev->intr_handle); - hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler, + rte_intr_disable(pci_dev->intr_handle); + hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler, eth_dev); hns3_cmd_uninit(hw); hns3_cmd_destroy_queue(hw); @@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev) { struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint8_t base = RTE_INTR_VEC_ZERO_OFFSET; uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET; uint16_t q_id; @@ -2136,16 +2136,17 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev) (void)hns3vf_bind_ring_with_vector(hw, vec, false, HNS3_RING_TYPE_RX, q_id); - if (vec < base + intr_handle->nb_efd - 1) + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } } /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + /* Cleanup vector list */ + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); } static int @@ -2301,7 +2302,7 @@ static int hns3vf_map_rx_interrupt(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint8_t base = RTE_INTR_VEC_ZERO_OFFSET; uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET; @@ -2324,11 +2325,10 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev) if (rte_intr_efd_enable(intr_handle, intr_vector)) return -EINVAL; - if (intr_handle->intr_vec == NULL) { - intr_handle->intr_vec = - 
rte_zmalloc("intr_vec", - hw->used_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + /* Allocate vector list */ + if (!rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + hw->used_rx_queues)) { hns3_err(hw, "Failed to allocate %u rx_queues" " intr_vec", hw->used_rx_queues); ret = -ENOMEM; @@ -2346,20 +2346,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev) HNS3_RING_TYPE_RX, q_id); if (ret) goto vf_bind_vector_error; - intr_handle->intr_vec[q_id] = vec; + + if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec)) + goto vf_bind_vector_error; + /* * If there are not enough efds (e.g. not enough interrupt), * remaining queues will be bond to the last interrupt. */ - if (vec < base + intr_handle->nb_efd - 1) + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1) vec++; } rte_intr_enable(intr_handle); return 0; vf_bind_vector_error: - free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; + rte_intr_handle_vec_list_free(intr_handle); vf_alloc_intr_vec_error: rte_intr_efd_disable(intr_handle); return ret; @@ -2370,7 +2372,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw) { struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint16_t q_id; int ret; @@ -2380,8 +2382,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw) if (rte_intr_dp_is_en(intr_handle)) { for (q_id = 0; q_id < hw->used_rx_queues; q_id++) { ret = hns3vf_bind_ring_with_vector(hw, - intr_handle->intr_vec[q_id], true, - HNS3_RING_TYPE_RX, q_id); + rte_intr_handle_vec_list_index_get(intr_handle, + q_id), + true, HNS3_RING_TYPE_RX, q_id); if (ret) return ret; } @@ -2845,7 +2848,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns) int ret; if (hw->reset.level == HNS3_VF_FULL_RESET) { - 
rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); ret = hns3vf_set_bus_master(pci_dev, true); if (ret < 0) { hns3_err(hw, "failed to set pci bus, ret = %d", ret); @@ -2871,7 +2874,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns) hns3_err(hw, "Failed to enable msix"); } - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); } ret = hns3_reset_all_tqps(hns); diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 0f222b37f9..eabec24dcc 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -1038,7 +1038,7 @@ int hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (dev->data->dev_conf.intr_conf.rxq == 0) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 7b230e2ed1..05f2b3c53c 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1451,7 +1451,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused) } i40e_set_default_ptype_table(dev); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; rte_eth_copy_pci_info(dev, pci_dev); dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; @@ -1985,7 +1985,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi) { struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); uint16_t msix_vect = vsi->msix_intr; uint16_t i; @@ -2101,10 +2101,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, 
uint16_t itr_idx) { struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); uint16_t msix_vect = vsi->msix_intr; - uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd); + uint16_t nb_msix = RTE_MIN(vsi->nb_msix, + rte_intr_handle_nb_efd_get(intr_handle)); uint16_t queue_idx = 0; int record = 0; int i; @@ -2154,8 +2155,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx) vsi->nb_used_qps - i, itr_idx); for (; !!record && i < vsi->nb_used_qps; i++) - intr_handle->intr_vec[queue_idx + i] = - msix_vect; + rte_intr_handle_vec_list_index_set(intr_handle, + queue_idx + i, msix_vect); break; } /* 1:1 queue/msix_vect mapping */ @@ -2163,7 +2164,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx) vsi->base_queue + i, 1, itr_idx); if (!!record) - intr_handle->intr_vec[queue_idx + i] = msix_vect; + if (rte_intr_handle_vec_list_index_set(intr_handle, + queue_idx + i, msix_vect)) + return -rte_errno; msix_vect++; nb_msix--; @@ -2177,7 +2180,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi) { struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); struct i40e_pf *pf = I40E_VSI_TO_PF(vsi); uint16_t msix_intr, i; @@ -2204,7 +2207,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi) { struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); struct i40e_pf *pf = 
I40E_VSI_TO_PF(vsi); uint16_t msix_intr, i; @@ -2370,7 +2373,7 @@ i40e_dev_start(struct rte_eth_dev *dev) struct i40e_vsi *main_vsi = pf->main_vsi; int ret, i; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; struct i40e_vsi *vsi; uint16_t nb_rxq, nb_txq; @@ -2388,12 +2391,10 @@ i40e_dev_start(struct rte_eth_dev *dev) return ret; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), - 0); - if (!intr_handle->intr_vec) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", dev->data->nb_rx_queues); @@ -2534,7 +2535,7 @@ i40e_dev_stop(struct rte_eth_dev *dev) struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct i40e_vsi *main_vsi = pf->main_vsi; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int i; if (hw->adapter_stopped == 1) @@ -2575,10 +2576,10 @@ i40e_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + /* Cleanup vector list */ + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); /* reset hierarchy commit */ pf->tm_conf.committed = false; @@ -2597,7 +2598,7 @@ i40e_dev_close(struct rte_eth_dev *dev) struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct 
rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_mirror_rule *p_mirror; struct i40e_filter_control_settings settings; struct rte_flow *p_flow; @@ -11404,11 +11405,11 @@ static int i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); if (msix_intr == I40E_MISC_VEC_ID) I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, I40E_PFINT_DYN_CTL0_INTENA_MASK | @@ -11423,7 +11424,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) I40E_PFINT_DYN_CTLN_ITR_INDX_MASK); I40E_WRITE_FLUSH(hw); - rte_intr_ack(&pci_dev->intr_handle); + rte_intr_ack(pci_dev->intr_handle); return 0; } @@ -11432,11 +11433,11 @@ static int i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); if (msix_intr == I40E_MISC_VEC_ID) I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, I40E_PFINT_DYN_CTL0_ITR_INDX_MASK); diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index 0cfe13b7b2..4ecc160a75 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -678,7 +678,7 @@ 
i40evf_config_irq_map(struct rte_eth_dev *dev) uint8_t *cmd_buffer = NULL; struct virtchnl_irq_map_info *map_info; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec, cmd_buffer_size, max_vectors, nb_msix, msix_base, i; uint16_t rxq_map[vf->vf_res->max_vectors]; int err; @@ -689,12 +689,14 @@ i40evf_config_irq_map(struct rte_eth_dev *dev) msix_base = I40E_RX_VEC_START; /* For interrupt mode, available vector id is from 1. */ max_vectors = vf->vf_res->max_vectors - 1; - nb_msix = RTE_MIN(max_vectors, intr_handle->nb_efd); + nb_msix = RTE_MIN(max_vectors, + (uint32_t)rte_intr_handle_nb_efd_get(intr_handle)); vec = msix_base; for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq_map[vec] |= 1 << i; - intr_handle->intr_vec[i] = vec++; + rte_intr_handle_vec_list_index_set(intr_handle, i, + vec++); if (vec >= vf->vf_res->max_vectors) vec = msix_base; } @@ -705,7 +707,8 @@ i40evf_config_irq_map(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq_map[msix_base] |= 1 << i; if (rte_intr_dp_is_en(intr_handle)) - intr_handle->intr_vec[i] = msix_base; + rte_intr_handle_vec_list_index_set(intr_handle, + i, msix_base); } } @@ -2003,7 +2006,7 @@ i40evf_enable_queues_intr(struct rte_eth_dev *dev) { struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (!rte_intr_allow_others(intr_handle)) { I40E_WRITE_REG(hw, @@ -2023,7 +2026,7 @@ i40evf_disable_queues_intr(struct rte_eth_dev *dev) { struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = 
pci_dev->intr_handle; if (!rte_intr_allow_others(intr_handle)) { I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01, @@ -2039,13 +2042,13 @@ static int i40evf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t interval = i40e_calc_itr_interval(0, 0); uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); if (msix_intr == I40E_MISC_VEC_ID) I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01, I40E_VFINT_DYN_CTL01_INTENA_MASK | @@ -2072,11 +2075,11 @@ static int i40evf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); if (msix_intr == I40E_MISC_VEC_ID) I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01, 0); else @@ -2166,7 +2169,7 @@ i40evf_dev_start(struct rte_eth_dev *dev) struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; PMD_INIT_FUNC_TRACE(); @@ -2185,11 +2188,10 @@ i40evf_dev_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - 
dev->data->nb_rx_queues * sizeof(int), 0); - if (!intr_handle->intr_vec) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -2243,7 +2245,7 @@ static int i40evf_dev_stop(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); @@ -2260,10 +2262,9 @@ i40evf_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); /* remove all mac addrs */ i40evf_add_del_all_mac_addr(dev, FALSE); /* remove all multicast addresses */ diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 574cfe055e..f768fd02b1 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -658,17 +658,17 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev, return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (!intr_handle->intr_vec) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec", dev->data->nb_rx_queues); return -1; } } + qv_map = 
rte_zmalloc("qv_map", dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0); if (!qv_map) { @@ -728,7 +728,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev, for (i = 0; i < dev->data->nb_rx_queues; i++) { qv_map[i].queue_id = i; qv_map[i].vector_id = vf->msix_base; - intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID; + rte_intr_handle_vec_list_index_set(intr_handle, + i, IAVF_MISC_VEC_ID); } vf->qv_map = qv_map; PMD_DRV_LOG(DEBUG, @@ -738,14 +739,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev, /* If Rx interrupt is reuquired, and we can use * multi interrupts, then the vec is from 1 */ - vf->nb_msix = RTE_MIN(intr_handle->nb_efd, - (uint16_t)(vf->vf_res->max_vectors - 1)); + vf->nb_msix = + RTE_MIN(rte_intr_handle_nb_efd_get(intr_handle), + (uint16_t)(vf->vf_res->max_vectors - 1)); vf->msix_base = IAVF_RX_VEC_START; vec = IAVF_RX_VEC_START; for (i = 0; i < dev->data->nb_rx_queues; i++) { qv_map[i].queue_id = i; qv_map[i].vector_id = vec; - intr_handle->intr_vec[i] = vec++; + rte_intr_handle_vec_list_index_set(intr_handle, + i, vec++); if (vec >= vf->nb_msix + IAVF_RX_VEC_START) vec = IAVF_RX_VEC_START; } @@ -909,10 +912,8 @@ iavf_dev_stop(struct rte_eth_dev *dev) /* Disable the interrupt for Rx */ rte_intr_efd_disable(intr_handle); /* Rx interrupt vector mapping free */ - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); /* remove all mac addrs */ iavf_add_del_all_mac_addr(adapter, false); @@ -1661,7 +1662,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter); uint16_t msix_intr; - msix_intr = pci_dev->intr_handle.intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle, + queue_id); if (msix_intr == IAVF_MISC_VEC_ID) { PMD_DRV_LOG(INFO, "MISC is also enabled for control"); 
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01, @@ -1679,7 +1681,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) IAVF_WRITE_FLUSH(hw); - rte_intr_ack(&pci_dev->intr_handle); + rte_intr_ack(pci_dev->intr_handle); return 0; } @@ -1691,7 +1693,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; - msix_intr = pci_dev->intr_handle.intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle, + queue_id); if (msix_intr == IAVF_MISC_VEC_ID) { PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it"); return -EIO; @@ -2325,12 +2328,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) ð_dev->data->mac_addrs[0]); /* register callback func to eal lib */ - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, iavf_dev_interrupt_handler, (void *)eth_dev); /* enable uio intr after callback register */ - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); /* configure and enable device interrupt */ iavf_enable_irq0(hw); @@ -2351,7 +2354,7 @@ iavf_dev_close(struct rte_eth_dev *dev) { struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index 06dc663947..13425f3005 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -1691,9 +1691,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num) * disable interrupt to avoid the admin queue message to be read * before 
iavf_read_msg_from_pf. */ - rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); err = iavf_execute_vf_cmd(adapter, &args); - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); if (err) { PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES"); return err; diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 4c2e0c7216..fc4111fe63 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -535,13 +535,13 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw) rte_spinlock_lock(&hw->vc_cmd_send_lock); - rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); ice_dcf_disable_irq0(hw); if (ice_dcf_get_vf_resource(hw) || ice_dcf_get_vf_vsi_map(hw) < 0) err = -1; - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); ice_dcf_enable_irq0(hw); rte_spinlock_unlock(&hw->vc_cmd_send_lock); @@ -680,9 +680,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) } hw->eth_dev = eth_dev; - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, ice_dcf_dev_interrupt_handler, hw); - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); ice_dcf_enable_irq0(hw); return 0; @@ -704,7 +704,7 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS) if (hw->tm_conf.committed) { diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index cab7c4da87..2e091a0ec0 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -153,11 +153,10 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, return -1; } - if 
(rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (!intr_handle->intr_vec) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec", dev->data->nb_rx_queues); return -1; @@ -202,7 +201,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, hw->msix_base = IAVF_MISC_VEC_ID; for (i = 0; i < dev->data->nb_rx_queues; i++) { hw->rxq_map[hw->msix_base] |= 1 << i; - intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID; + rte_intr_handle_vec_list_index_set(intr_handle, + i, IAVF_MISC_VEC_ID); } PMD_DRV_LOG(DEBUG, "vector %u are mapping to all Rx queues", @@ -212,12 +212,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, * multi interrupts, then the vec is from 1 */ hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors, - intr_handle->nb_efd); + rte_intr_handle_nb_efd_get(intr_handle)); hw->msix_base = IAVF_MISC_VEC_ID; vec = IAVF_MISC_VEC_ID; for (i = 0; i < dev->data->nb_rx_queues; i++) { hw->rxq_map[vec] |= 1 << i; - intr_handle->intr_vec[i] = vec++; + rte_intr_handle_vec_list_index_set(intr_handle, + i, vec++); if (vec >= hw->nb_msix) vec = IAVF_RX_VEC_START; } @@ -614,10 +615,8 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev) ice_dcf_stop_queues(dev); rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false); dev->data->dev_link.link_status = ETH_LINK_DOWN; diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index a4cd39c954..6c6caeb4aa 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -2013,7 +2013,7 @@ 
ice_dev_init(struct rte_eth_dev *dev) ice_set_default_ptype_table(dev); pci_dev = RTE_DEV_TO_PCI(dev->device); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); pf->dev_data = dev->data; @@ -2204,7 +2204,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi) { struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id]; struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_hw *hw = ICE_VSI_TO_HW(vsi); uint16_t msix_intr, i; @@ -2234,7 +2234,7 @@ ice_dev_stop(struct rte_eth_dev *dev) struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_vsi *main_vsi = pf->main_vsi; struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint16_t i; /* avoid stopping again */ @@ -2259,10 +2259,8 @@ ice_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); pf->adapter_stopped = true; dev->data->dev_started = 0; @@ -2276,7 +2274,7 @@ ice_dev_close(struct rte_eth_dev *dev) struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); int ret; @@ -3167,10 +3165,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi) { 
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id]; struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_hw *hw = ICE_VSI_TO_HW(vsi); uint16_t msix_vect = vsi->msix_intr; - uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd); + uint16_t nb_msix = RTE_MIN(vsi->nb_msix, + rte_intr_handle_nb_efd_get(intr_handle)); uint16_t queue_idx = 0; int record = 0; int i; @@ -3198,8 +3197,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi) vsi->nb_used_qps - i); for (; !!record && i < vsi->nb_used_qps; i++) - intr_handle->intr_vec[queue_idx + i] = - msix_vect; + rte_intr_handle_vec_list_index_set(intr_handle, + queue_idx + i, msix_vect); + break; } @@ -3208,7 +3208,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi) vsi->base_queue + i, 1); if (!!record) - intr_handle->intr_vec[queue_idx + i] = msix_vect; + rte_intr_handle_vec_list_index_set(intr_handle, + queue_idx + i, + msix_vect); msix_vect++; nb_msix--; @@ -3220,7 +3222,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi) { struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id]; struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_hw *hw = ICE_VSI_TO_HW(vsi); uint16_t msix_intr, i; @@ -3246,7 +3248,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_vsi *vsi = pf->main_vsi; uint32_t intr_vector = 0; @@ -3266,11 +3268,10 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - 
intr_handle->intr_vec = - rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int), - 0); - if (!intr_handle->intr_vec) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, + dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", dev->data->nb_rx_queues); @@ -4539,19 +4540,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t val; uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M | GLINT_DYN_CTL_ITR_INDX_M; val &= ~GLINT_DYN_CTL_WB_ON_ITR_M; ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val); - rte_intr_ack(&pci_dev->intr_handle); + rte_intr_ack(pci_dev->intr_handle); return 0; } @@ -4560,11 +4561,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; - msix_intr = intr_handle->intr_vec[queue_id]; + msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M); diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 224a095483..86ac297ca3 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev) { struct igc_hw *hw = 
IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (rte_intr_allow_others(intr_handle) && dev->data->dev_conf.intr_conf.lsc) { @@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev) struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev); struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (rte_intr_allow_others(intr_handle) && dev->data->dev_conf.intr_conf.lsc) { @@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev) struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev); struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct rte_eth_link link; dev->data->dev_started = 0; @@ -668,10 +668,8 @@ eth_igc_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); return 0; } @@ -731,7 +729,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev) { struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_mask; uint32_t vec = IGC_MISC_VEC_ID; @@ -755,8 +753,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev) IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE | IGC_GPIE_PBA | IGC_GPIE_EIAME | IGC_GPIE_NSICR); - intr_mask 
= RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << - misc_shift; + intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), + uint32_t) << misc_shift; if (dev->data->dev_conf.intr_conf.lsc) intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC); @@ -773,8 +771,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_rx_queues; i++) { igc_write_ivar(hw, i, 0, vec); - intr_handle->intr_vec[i] = vec; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, i, vec); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1) vec++; } @@ -810,7 +808,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev) uint32_t mask; struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0; /* won't configure msix register if no mapping is done @@ -819,7 +817,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev) if (!rte_intr_dp_is_en(intr_handle)) return; - mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift; + mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), uint32_t) + << misc_shift; IGC_WRITE_REG(hw, IGC_EIMS, mask); } @@ -913,7 +912,7 @@ eth_igc_start(struct rte_eth_dev *dev) struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t *speeds; int ret; @@ -951,10 +950,10 @@ eth_igc_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if 
(rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", dev->data->nb_rx_queues); @@ -1169,7 +1168,7 @@ static int eth_igc_close(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev); int retry = 0; @@ -1339,11 +1338,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev) dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id); - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, eth_igc_interrupt_handler, (void *)dev); /* enable uio/vfio intr/eventfd mapping */ - rte_intr_enable(&pci_dev->intr_handle); + rte_intr_enable(pci_dev->intr_handle); /* enable support intr */ igc_intr_other_enable(dev); @@ -2100,7 +2099,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec = IGC_MISC_VEC_ID; if (rte_intr_allow_others(intr_handle)) @@ -2119,7 +2118,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec = IGC_MISC_VEC_ID; if (rte_intr_allow_others(intr_handle)) diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c index e620793966..3076fe7eab 
100644 --- a/drivers/net/ionic/ionic_ethdev.c +++ b/drivers/net/ionic/ionic_ethdev.c @@ -1071,7 +1071,7 @@ static int ionic_configure_intr(struct ionic_adapter *adapter) { struct rte_pci_device *pci_dev = adapter->pci_dev; - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int err; IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs); @@ -1085,11 +1085,9 @@ ionic_configure_intr(struct ionic_adapter *adapter) IONIC_PRINT(DEBUG, "Packet I/O interrupt on datapath is enabled"); - if (!intr_handle->intr_vec) { - intr_handle->intr_vec = rte_zmalloc("intr_vec", - adapter->nintrs * sizeof(int), 0); - - if (!intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + adapter->nintrs)) { IONIC_PRINT(ERR, "Failed to allocate %u vectors", adapter->nintrs); return -ENOMEM; @@ -1122,7 +1120,7 @@ static void ionic_unconfigure_intr(struct ionic_adapter *adapter) { struct rte_pci_device *pci_dev = adapter->pci_dev; - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; rte_intr_disable(intr_handle); diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index b5371568b5..48ee463e7d 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -1034,7 +1034,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) { struct ixgbe_adapter *ad = eth_dev->data->dev_private; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); struct ixgbe_vfta *shadow_vfta = @@ -1529,7 +1529,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev) uint32_t tc, tcs; struct ixgbe_adapter *ad = 
eth_dev->data->dev_private; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); struct ixgbe_vfta *shadow_vfta = @@ -2548,7 +2548,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev) struct ixgbe_vf_info *vfinfo = *IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; int err; bool link_up = false, negotiate = 0; @@ -2603,11 +2603,10 @@ ixgbe_dev_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -2843,7 +2842,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev) struct ixgbe_vf_info *vfinfo = *IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int vf; struct ixgbe_tm_conf *tm_conf = IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private); @@ -2894,10 +2893,8 @@ ixgbe_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if 
(rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); /* reset hierarchy commit */ tm_conf->committed = false; @@ -2981,7 +2978,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev) struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int retries = 0; int ret; @@ -4626,7 +4623,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_interrupt *intr = IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private); struct ixgbe_hw *hw = @@ -5307,7 +5304,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev) IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t intr_vector = 0; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int err, mask = 0; @@ -5368,11 +5365,10 @@ ixgbevf_dev_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -5411,7 +5407,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev) struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ixgbe_adapter *adapter = 
dev->data->dev_private; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (hw->adapter_stopped) return 0; @@ -5439,10 +5435,8 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); adapter->rss_reta_updated = 0; @@ -5454,7 +5448,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev) { struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int ret; PMD_INIT_FUNC_TRACE(); @@ -5937,7 +5931,7 @@ static int ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_interrupt *intr = IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private); struct ixgbe_hw *hw = @@ -5963,7 +5957,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t vec = IXGBE_MISC_VEC_ID; if (rte_intr_allow_others(intr_handle)) @@ -5979,7 +5973,7 @@ static int ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct 
rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t mask; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -6106,7 +6100,7 @@ static void ixgbevf_configure_msix(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t q_idx; @@ -6133,8 +6127,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev) * as IXGBE_VF_MAXMSIVECOTR = 1 */ ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); - intr_handle->intr_vec[q_idx] = vector_idx; - if (vector_idx < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, q_idx, + vector_idx); + if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vector_idx++; } @@ -6155,7 +6151,7 @@ static void ixgbe_configure_msix(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t queue_id, base = IXGBE_MISC_VEC_ID; @@ -6199,8 +6195,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev) queue_id++) { /* by default, 1:1 mapping */ ixgbe_set_ivar_map(hw, 0, queue_id, vec); - intr_handle->intr_vec[queue_id] = vec; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, + queue_id, vec); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c index f58ff4c0cb..1f558a6997 100644 --- a/drivers/net/memif/memif_socket.c +++ b/drivers/net/memif/memif_socket.c @@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel 
*cc) if (e == NULL) return 0; - size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd); + size = memif_msg_send(rte_intr_handle_fd_get(cc->intr_handle), &e->msg, + e->fd); if (size != sizeof(memif_msg_t)) { MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno)); ret = -1; @@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd) mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ? dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index]; - mq->intr_handle.fd = fd; + if (rte_intr_handle_fd_set(mq->intr_handle, fd)) + return -1; + mq->log2_ring_size = ar->log2_ring_size; mq->region = ar->region; mq->ring_offset = ar->offset; @@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx, dev->data->rx_queues[idx]; e->msg.type = MEMIF_MSG_TYPE_ADD_RING; - e->fd = mq->intr_handle.fd; + e->fd = rte_intr_handle_fd_get(mq->intr_handle); ar->index = idx; ar->offset = mq->ring_offset; ar->region = mq->region; @@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg) struct memif_control_channel *cc = arg; /* close control channel fd */ - close(intr_handle->fd); + close(rte_intr_handle_fd_get(intr_handle)); /* clear message queue */ while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) { TAILQ_REMOVE(&cc->msg_queue, elt, next); rte_free(elt); } + rte_intr_handle_instance_free(cc->intr_handle); /* free control channel */ rte_free(cc); } @@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev) "Unexpected message(s) in message queue."); } - ih = &pmd->cc->intr_handle; - if (ih->fd > 0) { + ih = pmd->cc->intr_handle; + if (rte_intr_handle_fd_get(ih) > 0) { ret = rte_intr_callback_unregister(ih, memif_intr_handler, pmd->cc); @@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev) pmd->cc, memif_intr_unregister_handler); } else if (ret > 0) { - close(ih->fd); + close(rte_intr_handle_fd_get(ih)); + rte_intr_handle_instance_free(ih); rte_free(pmd->cc); } pmd->cc = NULL; 
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev) else continue; } - if (mq->intr_handle.fd > 0) { - close(mq->intr_handle.fd); - mq->intr_handle.fd = -1; + + if (rte_intr_handle_fd_get(mq->intr_handle) > 0) { + close(rte_intr_handle_fd_get(mq->intr_handle)); + rte_intr_handle_fd_set(mq->intr_handle, -1); } } for (i = 0; i < pmd->cfg.num_s2c_rings; i++) { @@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev) else continue; } - if (mq->intr_handle.fd > 0) { - close(mq->intr_handle.fd); - mq->intr_handle.fd = -1; + + if (rte_intr_handle_fd_get(mq->intr_handle) > 0) { + close(rte_intr_handle_fd_get(mq->intr_handle)); + rte_intr_handle_fd_set(mq->intr_handle, -1); } } @@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc) mh.msg_control = ctl; mh.msg_controllen = sizeof(ctl); - size = recvmsg(cc->intr_handle.fd, &mh, 0); + size = recvmsg(rte_intr_handle_fd_get(cc->intr_handle), &mh, 0); if (size != sizeof(memif_msg_t)) { MIF_LOG(DEBUG, "Invalid message size = %zd", size); if (size > 0) @@ -774,7 +781,7 @@ memif_intr_handler(void *arg) /* if driver failed to assign device */ if (cc->dev == NULL) { memif_msg_send_from_queue(cc); - ret = rte_intr_callback_unregister_pending(&cc->intr_handle, + ret = rte_intr_callback_unregister_pending(cc->intr_handle, memif_intr_handler, cc, memif_intr_unregister_handler); @@ -812,12 +819,12 @@ memif_listener_handler(void *arg) int ret; addr_len = sizeof(client); - sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client, - (socklen_t *)&addr_len); + sockfd = accept(rte_intr_handle_fd_get(socket->intr_handle), + (struct sockaddr *)&client, (socklen_t *)&addr_len); if (sockfd < 0) { MIF_LOG(ERR, "Failed to accept connection request on socket fd %d", - socket->intr_handle.fd); + rte_intr_handle_fd_get(socket->intr_handle)); return; } @@ -829,13 +836,27 @@ memif_listener_handler(void *arg) goto error; } - cc->intr_handle.fd = sockfd; - cc->intr_handle.type = RTE_INTR_HANDLE_EXT; + /* Allocate 
interrupt instance */ + cc->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!cc->intr_handle) { + MIF_LOG(ERR, "Failed to allocate intr handle"); + goto error; + } + + if (rte_intr_handle_fd_set(cc->intr_handle, sockfd)) + goto error; + + if (rte_intr_handle_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT)) + goto error; + cc->socket = socket; cc->dev = NULL; TAILQ_INIT(&cc->msg_queue); - ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc); + ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler, + cc); if (ret < 0) { MIF_LOG(ERR, "Failed to register control channel callback."); goto error; @@ -857,8 +878,11 @@ memif_listener_handler(void *arg) close(sockfd); sockfd = -1; } - if (cc != NULL) + if (cc != NULL) { + if (cc->intr_handle) + rte_intr_handle_instance_free(cc->intr_handle); rte_free(cc); + } } static struct memif_socket * @@ -914,9 +938,23 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract) MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename); - sock->intr_handle.fd = sockfd; - sock->intr_handle.type = RTE_INTR_HANDLE_EXT; - ret = rte_intr_callback_register(&sock->intr_handle, + /* Allocate interrupt instance */ + sock->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!sock->intr_handle) { + MIF_LOG(ERR, "Failed to allocate intr handle"); + goto error; + } + + if (rte_intr_handle_fd_set(sock->intr_handle, sockfd)) + goto error; + + if (rte_intr_handle_type_set(sock->intr_handle, + RTE_INTR_HANDLE_EXT)) + goto error; + + ret = rte_intr_callback_register(sock->intr_handle, memif_listener_handler, sock); if (ret < 0) { MIF_LOG(ERR, "Failed to register interrupt " @@ -929,8 +967,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract) error: MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno)); - if (sock != NULL) + if (sock != NULL) { + 
rte_intr_handle_instance_free(sock->intr_handle); rte_free(sock); + } if (sockfd >= 0) close(sockfd); return NULL; @@ -1046,6 +1086,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev) MIF_LOG(ERR, "Failed to remove socket file: %s", socket->filename); } + if (pmd->role != MEMIF_ROLE_CLIENT) + rte_intr_handle_instance_free(socket->intr_handle); rte_free(socket); } } @@ -1108,13 +1150,26 @@ memif_connect_client(struct rte_eth_dev *dev) goto error; } - pmd->cc->intr_handle.fd = sockfd; - pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT; + /* Allocate interrupt instance */ + pmd->cc->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!pmd->cc->intr_handle) { + MIF_LOG(ERR, "Failed to allocate intr handle"); + goto error; + } + + if (rte_intr_handle_fd_set(pmd->cc->intr_handle, sockfd)) + goto error; + + if (rte_intr_handle_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT)) + goto error; + pmd->cc->socket = NULL; pmd->cc->dev = dev; TAILQ_INIT(&pmd->cc->msg_queue); - ret = rte_intr_callback_register(&pmd->cc->intr_handle, + ret = rte_intr_callback_register(pmd->cc->intr_handle, memif_intr_handler, pmd->cc); if (ret < 0) { MIF_LOG(ERR, "Failed to register interrupt callback for control fd"); @@ -1129,6 +1184,7 @@ memif_connect_client(struct rte_eth_dev *dev) sockfd = -1; } if (pmd->cc != NULL) { + rte_intr_handle_instance_free(pmd->cc->intr_handle); rte_free(pmd->cc); pmd->cc = NULL; } diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h index b9b8a15178..b0decbb0a2 100644 --- a/drivers/net/memif/memif_socket.h +++ b/drivers/net/memif/memif_socket.h @@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt { (sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path)) struct memif_socket { - struct rte_intr_handle intr_handle; /**< interrupt handle */ + struct rte_intr_handle *intr_handle; /**< interrupt handle */ char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */ TAILQ_HEAD(, 
memif_socket_dev_list_elt) dev_queue; @@ -101,7 +101,7 @@ struct memif_msg_queue_elt { }; struct memif_control_channel { - struct rte_intr_handle intr_handle; /**< interrupt handle */ + struct rte_intr_handle *intr_handle; /**< interrupt handle */ TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */ struct memif_socket *socket; /**< pointer to socket */ struct rte_eth_dev *dev; /**< pointer to device */ diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c index de6becd45e..38fd93d2a7 100644 --- a/drivers/net/memif/rte_eth_memif.c +++ b/drivers/net/memif/rte_eth_memif.c @@ -325,7 +325,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* consume interrupt */ if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) - size = read(mq->intr_handle.fd, &b, sizeof(b)); + size = read(rte_intr_handle_fd_get(mq->intr_handle), &b, + sizeof(b)); ring_size = 1 << mq->log2_ring_size; mask = ring_size - 1; @@ -461,7 +462,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) { uint64_t b; ssize_t size __rte_unused; - size = read(mq->intr_handle.fd, &b, sizeof(b)); + size = read(rte_intr_handle_fd_get(mq->intr_handle), &b, + sizeof(b)); } ring_size = 1 << mq->log2_ring_size; @@ -678,7 +680,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) { a = 1; - size = write(mq->intr_handle.fd, &a, sizeof(a)); + size = write(rte_intr_handle_fd_get(mq->intr_handle), &a, + sizeof(a)); if (unlikely(size < 0)) { MIF_LOG(WARNING, "Failed to send interrupt. %s", strerror(errno)); @@ -829,7 +832,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* Send interrupt, if enabled. 
*/ if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) { uint64_t a = 1; - ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a)); + ssize_t size = write(rte_intr_handle_fd_get(mq->intr_handle), + &a, sizeof(a)); if (unlikely(size < 0)) { MIF_LOG(WARNING, "Failed to send interrupt. %s", strerror(errno)); @@ -1089,8 +1093,11 @@ memif_init_queues(struct rte_eth_dev *dev) mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i); mq->last_head = 0; mq->last_tail = 0; - mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK); - if (mq->intr_handle.fd < 0) { + if (rte_intr_handle_fd_set(mq->intr_handle, + eventfd(0, EFD_NONBLOCK))) + return -rte_errno; + + if (rte_intr_handle_fd_get(mq->intr_handle) < 0) { MIF_LOG(WARNING, "Failed to create eventfd for tx queue %d: %s.", i, strerror(errno)); @@ -1112,8 +1119,11 @@ memif_init_queues(struct rte_eth_dev *dev) mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i); mq->last_head = 0; mq->last_tail = 0; - mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK); - if (mq->intr_handle.fd < 0) { + if (rte_intr_handle_fd_set(mq->intr_handle, + eventfd(0, EFD_NONBLOCK))) + return -rte_errno; + + if (rte_intr_handle_fd_get(mq->intr_handle) < 0) { MIF_LOG(WARNING, "Failed to create eventfd for rx queue %d: %s.", i, strerror(errno)); @@ -1307,12 +1317,26 @@ memif_tx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } + /* Allocate interrupt instance */ + mq->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!mq->intr_handle) { + MIF_LOG(ERR, "Failed to allocate intr handle"); + return -ENOMEM; + } + mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? 
MEMIF_RING_C2S : MEMIF_RING_S2C; mq->n_pkts = 0; mq->n_bytes = 0; - mq->intr_handle.fd = -1; - mq->intr_handle.type = RTE_INTR_HANDLE_EXT; + + if (rte_intr_handle_fd_set(mq->intr_handle, -1)) + return -rte_errno; + + if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT)) + return -rte_errno; + mq->in_port = dev->data->port_id; dev->data->tx_queues[qid] = mq; @@ -1336,11 +1360,25 @@ memif_rx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } + /* Allocate interrupt instance */ + mq->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!mq->intr_handle) { + MIF_LOG(ERR, "Failed to allocate intr handle"); + return -ENOMEM; + } + mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S; mq->n_pkts = 0; mq->n_bytes = 0; - mq->intr_handle.fd = -1; - mq->intr_handle.type = RTE_INTR_HANDLE_EXT; + + if (rte_intr_handle_fd_set(mq->intr_handle, -1)) + return -rte_errno; + + if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT)) + return -rte_errno; + mq->mempool = mb_pool; mq->in_port = dev->data->port_id; dev->data->rx_queues[qid] = mq; @@ -1356,6 +1394,7 @@ memif_queue_release(void *queue) if (!mq) return; + rte_intr_handle_instance_free(mq->intr_handle); rte_free(mq); } diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h index 2038bda742..a5ee23d42e 100644 --- a/drivers/net/memif/rte_eth_memif.h +++ b/drivers/net/memif/rte_eth_memif.h @@ -68,7 +68,7 @@ struct memif_queue { uint64_t n_pkts; /**< number of rx/tx packets */ uint64_t n_bytes; /**< number of rx/tx bytes */ - struct rte_intr_handle intr_handle; /**< interrupt handle */ + struct rte_intr_handle *intr_handle; /**< interrupt handle */ memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */ }; diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c index c522157a0a..8d32694613 100644 --- a/drivers/net/mlx4/mlx4.c +++ b/drivers/net/mlx4/mlx4.c @@ -1045,9 +1045,20 @@ 
mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; /* Initialize local interrupt handle for current port. */ - memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle)); - priv->intr_handle.fd = -1; - priv->intr_handle.type = RTE_INTR_HANDLE_EXT; + priv->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!priv->intr_handle) { + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + goto port_error; + } + + if (rte_intr_handle_fd_set(priv->intr_handle, -1)) + goto port_error; + + if (rte_intr_handle_type_set(priv->intr_handle, + RTE_INTR_HANDLE_EXT)) + goto port_error; /* * Override ethdev interrupt handle pointer with private * handle instead of that of the parent PCI device used by @@ -1060,7 +1071,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) * besides setting up eth_dev->intr_handle, the rest is * handled by rte_intr_rx_ctl(). */ - eth_dev->intr_handle = &priv->intr_handle; + eth_dev->intr_handle = priv->intr_handle; priv->dev_data = eth_dev->data; eth_dev->dev_ops = &mlx4_dev_ops; #ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS @@ -1105,6 +1116,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) prev_dev = eth_dev; continue; port_error: + rte_intr_handle_instance_free(priv->intr_handle); rte_free(priv); if (eth_dev != NULL) eth_dev->data->dev_private = NULL; diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h index e07b1d2386..2d0c512f79 100644 --- a/drivers/net/mlx4/mlx4.h +++ b/drivers/net/mlx4/mlx4.h @@ -176,7 +176,7 @@ struct mlx4_priv { uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */ uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */ uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */ - struct rte_intr_handle intr_handle; /**< Port interrupt handle. 
*/ + struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */ struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */ struct { uint32_t dev_gen; /* Generation number to flush local caches. */ diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c index d56009c418..1e28b8e4b2 100644 --- a/drivers/net/mlx4/mlx4_intr.c +++ b/drivers/net/mlx4/mlx4_intr.c @@ -43,12 +43,13 @@ static int mlx4_link_status_check(struct mlx4_priv *priv); static void mlx4_rx_intr_vec_disable(struct mlx4_priv *priv) { - struct rte_intr_handle *intr_handle = &priv->intr_handle; + struct rte_intr_handle *intr_handle = priv->intr_handle; rte_intr_free_epoll_fd(intr_handle); - free(intr_handle->intr_vec); - intr_handle->nb_efd = 0; - intr_handle->intr_vec = NULL; + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); + + rte_intr_handle_nb_efd_set(intr_handle, 0); } /** @@ -67,11 +68,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv) unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues; unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID); unsigned int count = 0; - struct rte_intr_handle *intr_handle = &priv->intr_handle; + struct rte_intr_handle *intr_handle = priv->intr_handle; mlx4_rx_intr_vec_disable(priv); - intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0])); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) { rte_errno = ENOMEM; ERROR("failed to allocate memory for interrupt vector," " Rx interrupts will not be supported"); @@ -83,9 +83,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv) /* Skip queues that cannot request interrupts. */ if (!rxq || !rxq->channel) { /* Use invalid intr_vec[] index to disable entry. 
*/ - intr_handle->intr_vec[i] = - RTE_INTR_VEC_RXTX_OFFSET + - RTE_MAX_RXTX_INTR_VEC_ID; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)) + return -rte_errno; continue; } if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { @@ -96,14 +96,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv) mlx4_rx_intr_vec_disable(priv); return -rte_errno; } - intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count; - intr_handle->efds[count] = rxq->channel->fd; + + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + count)) + return -rte_errno; + + if (rte_intr_handle_efds_index_set(intr_handle, i, + rxq->channel->fd)) + return -rte_errno; + count++; } if (!count) mlx4_rx_intr_vec_disable(priv); else - intr_handle->nb_efd = count; + if (rte_intr_handle_nb_efd_set(intr_handle, count)) + return -rte_errno; return 0; } @@ -254,12 +262,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv) { int err = rte_errno; /* Make sure rte_errno remains unchanged. 
*/ - if (priv->intr_handle.fd != -1) { - rte_intr_callback_unregister(&priv->intr_handle, + if (rte_intr_handle_fd_get(priv->intr_handle) != -1) { + rte_intr_callback_unregister(priv->intr_handle, (void (*)(void *)) mlx4_interrupt_handler, priv); - priv->intr_handle.fd = -1; + if (rte_intr_handle_fd_set(priv->intr_handle, -1)) + return -rte_errno; } rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv); priv->intr_alarm = 0; @@ -286,8 +295,11 @@ mlx4_intr_install(struct mlx4_priv *priv) mlx4_intr_uninstall(priv); if (intr_conf->lsc | intr_conf->rmv) { - priv->intr_handle.fd = priv->ctx->async_fd; - rc = rte_intr_callback_register(&priv->intr_handle, + if (rte_intr_handle_fd_set(priv->intr_handle, + priv->ctx->async_fd)) + return -rte_errno; + + rc = rte_intr_callback_register(priv->intr_handle, (void (*)(void *)) mlx4_interrupt_handler, priv); diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 5f8766aa48..117a3ded16 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -2589,9 +2589,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev, */ if (list[i].info.representor) { struct rte_intr_handle *intr_handle; - intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, - sizeof(*intr_handle), 0, - SOCKET_ID_ANY); + intr_handle = rte_intr_handle_instance_alloc + (RTE_INTR_HANDLE_DEFAULT_SIZE, true); if (!intr_handle) { DRV_LOG(ERR, "port %u failed to allocate memory for interrupt handler " @@ -2745,7 +2744,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev) if (eth_dev == NULL) return -rte_errno; /* Post create. 
*/ - eth_dev->intr_handle = &adev->intr_handle; + eth_dev->intr_handle = adev->intr_handle; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV; @@ -2929,7 +2928,16 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) int ret; int flags; - sh->intr_handle.fd = -1; + sh->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!sh->intr_handle) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + rte_errno = ENOMEM; + return; + } + rte_intr_handle_fd_set(sh->intr_handle, -1); + flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL); ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_SETFL, flags | O_NONBLOCK); @@ -2937,17 +2945,26 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) DRV_LOG(INFO, "failed to change file descriptor async event" " queue"); } else { - sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd; - sh->intr_handle.type = RTE_INTR_HANDLE_EXT; - if (rte_intr_callback_register(&sh->intr_handle, + rte_intr_handle_fd_set(sh->intr_handle, + ((struct ibv_context *)sh->ctx)->async_fd); + rte_intr_handle_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT); + if (rte_intr_callback_register(sh->intr_handle, mlx5_dev_interrupt_handler, sh)) { DRV_LOG(INFO, "Fail to install the shared interrupt."); - sh->intr_handle.fd = -1; + rte_intr_handle_fd_set(sh->intr_handle, -1); } } if (sh->devx) { #ifdef HAVE_IBV_DEVX_ASYNC - sh->intr_handle_devx.fd = -1; + sh->intr_handle_devx = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!sh->intr_handle_devx) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + rte_errno = ENOMEM; + return; + } + rte_intr_handle_fd_set(sh->intr_handle_devx, -1); sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(sh->ctx); struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp; @@ -2962,13 +2979,14 @@ 
mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) " devx comp"); return; } - sh->intr_handle_devx.fd = devx_comp->fd; - sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT; - if (rte_intr_callback_register(&sh->intr_handle_devx, + rte_intr_handle_fd_set(sh->intr_handle_devx, devx_comp->fd); + rte_intr_handle_type_set(sh->intr_handle_devx, + RTE_INTR_HANDLE_EXT); + if (rte_intr_callback_register(sh->intr_handle_devx, mlx5_dev_interrupt_handler_devx, sh)) { DRV_LOG(INFO, "Fail to install the devx shared" " interrupt."); - sh->intr_handle_devx.fd = -1; + rte_intr_handle_fd_set(sh->intr_handle_devx, -1); } #endif /* HAVE_IBV_DEVX_ASYNC */ } @@ -2985,13 +3003,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) void mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh) { - if (sh->intr_handle.fd >= 0) - mlx5_intr_callback_unregister(&sh->intr_handle, + if (rte_intr_handle_fd_get(sh->intr_handle) >= 0) + mlx5_intr_callback_unregister(sh->intr_handle, mlx5_dev_interrupt_handler, sh); + rte_intr_handle_instance_free(sh->intr_handle); #ifdef HAVE_IBV_DEVX_ASYNC - if (sh->intr_handle_devx.fd >= 0) - rte_intr_callback_unregister(&sh->intr_handle_devx, + if (rte_intr_handle_fd_get(sh->intr_handle_devx) >= 0) + rte_intr_callback_unregister(sh->intr_handle_devx, mlx5_dev_interrupt_handler_devx, sh); + rte_intr_handle_instance_free(sh->intr_handle_devx); if (sh->devx_comp) mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp); #endif diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c index 6356b66dc4..9007333c61 100644 --- a/drivers/net/mlx5/linux/mlx5_socket.c +++ b/drivers/net/mlx5/linux/mlx5_socket.c @@ -23,7 +23,7 @@ #define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d" int server_socket; /* Unix socket for primary process. */ -struct rte_intr_handle server_intr_handle; /* Interrupt handler. */ +struct rte_intr_handle *server_intr_handle; /* Interrupt handler. 
*/ /** * Handle server pmd socket interrupts. @@ -145,9 +145,20 @@ static int mlx5_pmd_interrupt_handler_install(void) { MLX5_ASSERT(server_socket); - server_intr_handle.fd = server_socket; - server_intr_handle.type = RTE_INTR_HANDLE_EXT; - return rte_intr_callback_register(&server_intr_handle, + server_intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + false); + if (!server_intr_handle) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + return -ENOMEM; + } + if (rte_intr_handle_fd_set(server_intr_handle, server_socket)) + return -1; + + if (rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT)) + return -1; + + return rte_intr_callback_register(server_intr_handle, mlx5_pmd_socket_handle, NULL); } @@ -158,12 +169,13 @@ static void mlx5_pmd_interrupt_handler_uninstall(void) { if (server_socket) { - mlx5_intr_callback_unregister(&server_intr_handle, + mlx5_intr_callback_unregister(server_intr_handle, mlx5_pmd_socket_handle, NULL); } - server_intr_handle.fd = 0; - server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN; + rte_intr_handle_fd_set(server_intr_handle, 0); + rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN); + rte_intr_handle_instance_free(server_intr_handle); } /** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index e02714e231..b4666fd379 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1016,7 +1016,7 @@ struct mlx5_dev_txpp { uint32_t tick; /* Completion tick duration in nanoseconds. */ uint32_t test; /* Packet pacing test mode. */ int32_t skew; /* Scheduling skew. */ - struct rte_intr_handle intr_handle; /* Periodic interrupt. */ + struct rte_intr_handle *intr_handle; /* Periodic interrupt. */ void *echan; /* Event Channel. */ struct mlx5_txpp_wq clock_queue; /* Clock Queue. */ struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */ @@ -1184,8 +1184,8 @@ struct mlx5_dev_ctx_shared { /* Memory Pool for mlx5 flow resources. 
*/ struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */ /* Shared interrupt handler section. */ - struct rte_intr_handle intr_handle; /* Interrupt handler for device. */ - struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */ + struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */ + struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */ void *devx_comp; /* DEVX async comp obj. */ struct mlx5_devx_obj *tis; /* TIS object. */ struct mlx5_devx_obj *td; /* Transport domain. */ diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index abd8ce7989..75bcb82bf9 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -837,10 +837,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) if (!dev->data->dev_conf.intr_conf.rxq) return 0; mlx5_rx_intr_vec_disable(dev); - intr_handle->intr_vec = mlx5_malloc(0, - n * sizeof(intr_handle->intr_vec[0]), - 0, SOCKET_ID_ANY); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) { DRV_LOG(ERR, "port %u failed to allocate memory for interrupt" " vector, Rx interrupts will not be supported", @@ -848,7 +845,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) rte_errno = ENOMEM; return -rte_errno; } - intr_handle->type = RTE_INTR_HANDLE_EXT; + + if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT)) + return -rte_errno; + for (i = 0; i != n; ++i) { /* This rxq obj must not be released in this function. */ struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); @@ -859,9 +859,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) if (!rxq_obj || (!rxq_obj->ibv_channel && !rxq_obj->devx_channel)) { /* Use invalid intr_vec[] index to disable entry. 
*/ - intr_handle->intr_vec[i] = - RTE_INTR_VEC_RXTX_OFFSET + - RTE_MAX_RXTX_INTR_VEC_ID; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)) + return -rte_errno; /* Decrease the rxq_ctrl's refcnt */ if (rxq_ctrl) mlx5_rxq_release(dev, i); @@ -888,14 +888,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) mlx5_rx_intr_vec_disable(dev); return -rte_errno; } - intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count; - intr_handle->efds[count] = rxq_obj->fd; + + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + count)) + return -rte_errno; + if (rte_intr_handle_efds_index_set(intr_handle, count, + rxq_obj->fd)) + return -rte_errno; count++; } if (!count) mlx5_rx_intr_vec_disable(dev); else - intr_handle->nb_efd = count; + if (rte_intr_handle_nb_efd_set(intr_handle, count)) + return -rte_errno; return 0; } @@ -916,11 +922,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev) if (!dev->data->dev_conf.intr_conf.rxq) return; - if (!intr_handle->intr_vec) + if (!rte_intr_handle_vec_list_base(intr_handle)) goto free; for (i = 0; i != n; ++i) { - if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET + - RTE_MAX_RXTX_INTR_VEC_ID) + if (rte_intr_handle_vec_list_index_get(intr_handle, i) == + RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID) continue; /** * Need to access directly the queue to release the reference @@ -930,10 +936,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev) } free: rte_intr_free_epoll_fd(intr_handle); - if (intr_handle->intr_vec) - mlx5_free(intr_handle->intr_vec); - intr_handle->nb_efd = 0; - intr_handle->intr_vec = NULL; + + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); + + rte_intr_handle_nb_efd_set(intr_handle, 0); } /** diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 54173bfacb..d349e5df44 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ 
b/drivers/net/mlx5/mlx5_trigger.c @@ -1129,7 +1129,7 @@ mlx5_dev_start(struct rte_eth_dev *dev) dev->rx_pkt_burst = mlx5_select_rx_function(dev); /* Enable datapath on secondary process. */ mlx5_mp_os_req_start_rxtx(dev); - if (priv->sh->intr_handle.fd >= 0) { + if (rte_intr_handle_fd_get(priv->sh->intr_handle) >= 0) { priv->sh->port[priv->dev_port - 1].ih_port_id = (uint32_t)dev->data->port_id; } else { @@ -1138,7 +1138,7 @@ mlx5_dev_start(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.lsc = 0; dev->data->dev_conf.intr_conf.rmv = 0; } - if (priv->sh->intr_handle_devx.fd >= 0) + if (rte_intr_handle_fd_get(priv->sh->intr_handle_devx) >= 0) priv->sh->port[priv->dev_port - 1].devx_ih_port_id = (uint32_t)dev->data->port_id; return 0; diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c index 4f6da9f2d1..9567c4866d 100644 --- a/drivers/net/mlx5/mlx5_txpp.c +++ b/drivers/net/mlx5/mlx5_txpp.c @@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg) static void mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh) { - if (!sh->txpp.intr_handle.fd) + if (!rte_intr_handle_fd_get(sh->txpp.intr_handle)) return; - mlx5_intr_callback_unregister(&sh->txpp.intr_handle, + mlx5_intr_callback_unregister(sh->txpp.intr_handle, mlx5_txpp_interrupt_handler, sh); - sh->txpp.intr_handle.fd = 0; + rte_intr_handle_fd_set(sh->txpp.intr_handle, 0); + rte_intr_handle_instance_free(sh->txpp.intr_handle); } /* Attach interrupt handler and fires first request to Rearm Queue. 
*/ @@ -784,13 +785,23 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh) rte_errno = errno; return -rte_errno; } - memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle)); + sh->txpp.intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!sh->txpp.intr_handle) { + DRV_LOG(ERR, "Fail to allocate intr_handle"); + return -ENOMEM; + } fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan); - sh->txpp.intr_handle.fd = fd; - sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT; - if (rte_intr_callback_register(&sh->txpp.intr_handle, + if (rte_intr_handle_fd_set(sh->txpp.intr_handle, fd)) + return -rte_errno; + + if (rte_intr_handle_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT)) + return -rte_errno; + + if (rte_intr_callback_register(sh->txpp.intr_handle, mlx5_txpp_interrupt_handler, sh)) { - sh->txpp.intr_handle.fd = 0; + rte_intr_handle_fd_set(sh->txpp.intr_handle, 0); DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno); return -rte_errno; } diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c index 9e2a405973..caf64ccfc2 100644 --- a/drivers/net/netvsc/hn_ethdev.c +++ b/drivers/net/netvsc/hn_ethdev.c @@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size) eth_dev->device = &dev->device; /* interrupt is simulated */ - dev->intr_handle.type = RTE_INTR_HANDLE_EXT; + rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT); eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; - eth_dev->intr_handle = &dev->intr_handle; + eth_dev->intr_handle = dev->intr_handle; return eth_dev; } diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 1b4bc33593..ee083359b5 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -307,11 +307,9 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, struct nfp_net_hw *hw; int i; - if (!intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - 
dev->data->nb_rx_queues * sizeof(int), 0); - if (!intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -320,11 +318,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (intr_handle->type == RTE_INTR_HANDLE_UIO) { + if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO"); /* UIO just supports one queue and no LSC*/ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0); - intr_handle->intr_vec[0] = 0; + if (rte_intr_handle_vec_list_index_set(intr_handle, 0, 0)) + return -1; } else { PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO"); for (i = 0; i < dev->data->nb_rx_queues; i++) { @@ -333,9 +332,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, * efd interrupts */ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); - intr_handle->intr_vec[i] = i + 1; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + i + 1)) + return -1; PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i, - intr_handle->intr_vec[i]); + rte_intr_handle_vec_list_index_get(intr_handle, + i)); } } @@ -808,7 +810,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO) + if (rte_intr_handle_type_get(pci_dev->intr_handle) != + RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ @@ -828,7 +831,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO) + if 
(rte_intr_handle_type_get(pci_dev->intr_handle) != + RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ @@ -878,7 +882,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) { /* If MSI-X auto-masking is used, clear the entry */ rte_wmb(); - rte_intr_ack(&pci_dev->intr_handle); + rte_intr_ack(pci_dev->intr_handle); } else { /* Make sure all updates are written before un-masking */ rte_wmb(); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 534a38c14f..f9086f806f 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -81,7 +81,7 @@ static int nfp_net_start(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t new_ctrl, update = 0; struct nfp_net_hw *hw; struct nfp_pf_dev *pf_dev; @@ -108,12 +108,13 @@ nfp_net_start(struct rte_eth_dev *dev) "with NFP multiport PF"); return -EINVAL; } - if (intr_handle->type == RTE_INTR_HANDLE_UIO) { + if (rte_intr_handle_type_get(intr_handle) == + RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. 
* Unregistering LSC interrupt handler */ - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { @@ -328,10 +329,10 @@ nfp_net_close(struct rte_eth_dev *dev) nfp_cpp_free(pf_dev->cpp); rte_free(pf_dev); - rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); /* unregister callback func from eal lib */ - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); @@ -574,7 +575,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* Registering LSC interrupt handler */ - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index b697b55865..e167d364fc 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -49,7 +49,7 @@ static int nfp_netvf_start(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t new_ctrl, update = 0; struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; @@ -69,12 +69,13 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { - if (intr_handle->type == RTE_INTR_HANDLE_UIO) { + if (rte_intr_handle_type_get(intr_handle) == + RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. 
* Unregistering LSC interrupt handler */ - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { @@ -223,10 +224,10 @@ nfp_netvf_close(struct rte_eth_dev *dev) nfp_net_reset_rx_queue(this_rx_q); } - rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); /* unregister callback func from eal lib */ - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); @@ -439,7 +440,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* Registering LSC interrupt handler */ - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 3b5c6615ad..fe4d675c0f 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; const struct rte_memzone *mz; uint32_t ctrl_ext; int err; @@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev) { struct ngbe_hw *hw = ngbe_dev_hw(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; int err; bool link_up = false, negotiate = false; @@ -372,11 +372,10 @@ ngbe_dev_start(struct rte_eth_dev *dev) return -1; } - if 
(rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", dev->data->nb_rx_queues); @@ -503,7 +502,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev) struct rte_eth_link link; struct ngbe_hw *hw = ngbe_dev_hw(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; if (hw->adapter_stopped) return 0; @@ -540,10 +539,8 @@ ngbe_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); hw->adapter_stopped = true; dev->data->dev_started = 0; @@ -559,7 +556,7 @@ ngbe_dev_close(struct rte_eth_dev *dev) { struct ngbe_hw *hw = ngbe_dev_hw(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int retries = 0; int ret; @@ -1093,7 +1090,7 @@ static void ngbe_configure_msix(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct ngbe_hw *hw = ngbe_dev_hw(dev); uint32_t queue_id, base = NGBE_MISC_VEC_ID; uint32_t vec = NGBE_MISC_VEC_ID; @@ -1128,8 +1125,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev) queue_id++) { /* by 
default, 1:1 mapping */ ngbe_set_ivar_map(hw, 0, queue_id, vec); - intr_handle->intr_vec[queue_id] = vec; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, + queue_id, vec); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index b121488faf..3cdd19dc68 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -34,7 +34,7 @@ static int nix_lf_register_err_irq(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int rc, vec; @@ -54,7 +54,7 @@ static void nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int vec; @@ -90,7 +90,7 @@ static int nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int rc, vec; @@ -110,7 +110,7 @@ static void nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int vec; @@ -263,7 +263,7 @@ int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct 
rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int vec, q, sqs, rqs, qs, rc = 0; @@ -308,7 +308,7 @@ void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int vec, q; @@ -332,7 +332,7 @@ int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); uint8_t rc = 0, vec, q; @@ -362,20 +362,21 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev) return rc; } - if (!handle->intr_vec) { - handle->intr_vec = rte_zmalloc("intr_vec", - dev->configured_cints * - sizeof(int), 0); - if (!handle->intr_vec) { - otx2_err("Failed to allocate %d rx intr_vec", - dev->configured_cints); - return -ENOMEM; + if (!rte_intr_handle_vec_list_base(handle)) { + rc = rte_intr_handle_vec_list_alloc(handle, "intr_vec", + dev->configured_cints); + if (rc) { + otx2_err("Fail to allocate intr vec list, " + "rc=%d", rc); + return rc; } } /* VFIO vector zero is resereved for misc interrupt so * doing required adjustment. 
(b13bfab4cd) */ - handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec; + if (rte_intr_handle_vec_list_index_set(handle, q, + RTE_INTR_VEC_RXTX_OFFSET + vec)) + return -1; /* Configure CQE interrupt coalescing parameters */ otx2_write64(((CQ_CQE_THRESH_DEFAULT) | @@ -395,7 +396,7 @@ void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct rte_intr_handle *handle = pci_dev->intr_handle; struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int vec, q; diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 323d46e6eb..b04e446030 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev) qdev->ops->common->slowpath_stop(edev); qdev->ops->common->remove(edev); - rte_intr_disable(&pci_dev->intr_handle); + rte_intr_disable(pci_dev->intr_handle); - switch (pci_dev->intr_handle.type) { + switch (rte_intr_handle_type_get(pci_dev->intr_handle)) { case RTE_INTR_HANDLE_UIO_INTX: case RTE_INTR_HANDLE_VFIO_LEGACY: - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, qede_interrupt_handler_intx, (void *)eth_dev); break; default: - rte_intr_callback_unregister(&pci_dev->intr_handle, + rte_intr_callback_unregister(pci_dev->intr_handle, qede_interrupt_handler, (void *)eth_dev); } @@ -2569,22 +2569,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf) } qede_update_pf_params(edev); - switch (pci_dev->intr_handle.type) { + switch (rte_intr_handle_type_get(pci_dev->intr_handle)) { case RTE_INTR_HANDLE_UIO_INTX: case RTE_INTR_HANDLE_VFIO_LEGACY: int_mode = ECORE_INT_MODE_INTA; - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, qede_interrupt_handler_intx, (void *)eth_dev); break; default: int_mode 
= ECORE_INT_MODE_MSIX; - rte_intr_callback_register(&pci_dev->intr_handle, + rte_intr_callback_register(pci_dev->intr_handle, qede_interrupt_handler, (void *)eth_dev); } - if (rte_intr_enable(&pci_dev->intr_handle)) { + if (rte_intr_enable(pci_dev->intr_handle)) { DP_ERR(edev, "rte_intr_enable() failed\n"); rc = -ENODEV; goto err; diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c index c2298ed23c..7cf17d3e38 100644 --- a/drivers/net/sfc/sfc_intr.c +++ b/drivers/net/sfc/sfc_intr.c @@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg) if (qmask & (1 << sa->mgmt_evq_index)) sfc_intr_handle_mgmt_evq(sa); - if (rte_intr_ack(&pci_dev->intr_handle) != 0) + if (rte_intr_ack(pci_dev->intr_handle) != 0) sfc_err(sa, "cannot reenable interrupts"); sfc_log_init(sa, "done"); @@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg) sfc_intr_handle_mgmt_evq(sa); - if (rte_intr_ack(&pci_dev->intr_handle) != 0) + if (rte_intr_ack(pci_dev->intr_handle) != 0) sfc_err(sa, "cannot reenable interrupts"); sfc_log_init(sa, "done"); @@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa) goto fail_intr_init; pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; if (intr->handler != NULL) { if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) { @@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa) goto fail_rte_intr_efd_enable; } if (rte_intr_dp_is_en(intr_handle)) { - intr_handle->intr_vec = - rte_calloc("intr_vec", - sa->eth_dev->data->nb_rx_queues, sizeof(int), - 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_handle_vec_list_alloc(intr_handle, + "intr_vec", + sa->eth_dev->data->nb_rx_queues)) { sfc_err(sa, "Failed to allocate %d rx_queues intr_vec", sa->eth_dev->data->nb_rx_queues); goto fail_intr_vector_alloc; } + } sfc_log_init(sa, "rte_intr_callback_register"); @@ -215,15 +214,17 @@ sfc_intr_start(struct sfc_adapter *sa) } sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u 
vec=%p", - intr_handle->type, intr_handle->max_intr, - intr_handle->nb_efd, intr_handle->intr_vec); + rte_intr_handle_type_get(intr_handle), + rte_intr_handle_max_intr_get(intr_handle), + rte_intr_handle_nb_efd_get(intr_handle), + rte_intr_handle_vec_list_base(intr_handle)); return 0; fail_rte_intr_enable: rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa); fail_rte_intr_cb_reg: - rte_free(intr_handle->intr_vec); + rte_intr_handle_vec_list_free(intr_handle); fail_intr_vector_alloc: rte_intr_efd_disable(intr_handle); @@ -250,9 +251,9 @@ sfc_intr_stop(struct sfc_adapter *sa) efx_intr_disable(sa->nic); - intr_handle = &pci_dev->intr_handle; + intr_handle = pci_dev->intr_handle; - rte_free(intr_handle->intr_vec); + rte_intr_handle_vec_list_free(intr_handle); rte_intr_efd_disable(intr_handle); if (rte_intr_disable(intr_handle) != 0) @@ -322,7 +323,7 @@ sfc_intr_attach(struct sfc_adapter *sa) sfc_log_init(sa, "entry"); - switch (pci_dev->intr_handle.type) { + switch (rte_intr_handle_type_get(pci_dev->intr_handle)) { #ifdef RTE_EXEC_ENV_LINUX case RTE_INTR_HANDLE_UIO_INTX: case RTE_INTR_HANDLE_VFIO_LEGACY: diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index c515de3bf7..d6c92f8d30 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg) struct rte_eth_dev *dev = cb_arg; struct pmd_internals *pmd = dev->data->dev_private; - tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev); + tap_nl_recv(rte_intr_handle_fd_get(pmd->intr_handle), + tap_nl_msg_handler, dev); } static int @@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set) /* In any case, disable interrupt if the conf is no longer there. 
*/ if (!dev->data->dev_conf.intr_conf.lsc) { - if (pmd->intr_handle.fd != -1) { + if (rte_intr_handle_fd_get(pmd->intr_handle) != -1) goto clean; - } + return 0; } if (set) { - pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK); - if (unlikely(pmd->intr_handle.fd == -1)) + rte_intr_handle_fd_set(pmd->intr_handle, + tap_nl_init(RTMGRP_LINK)); + if (unlikely(rte_intr_handle_fd_get(pmd->intr_handle) == -1)) return -EBADF; return rte_intr_callback_register( - &pmd->intr_handle, tap_dev_intr_handler, dev); + pmd->intr_handle, tap_dev_intr_handler, dev); } clean: do { - ret = rte_intr_callback_unregister(&pmd->intr_handle, + ret = rte_intr_callback_unregister(pmd->intr_handle, tap_dev_intr_handler, dev); if (ret >= 0) { break; @@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set) } } while (true); - tap_nl_final(pmd->intr_handle.fd); - pmd->intr_handle.fd = -1; + tap_nl_final(rte_intr_handle_fd_get(pmd->intr_handle)); + rte_intr_handle_fd_set(pmd->intr_handle, -1); return 0; } @@ -1923,6 +1925,15 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name, goto error_exit; } + /* Allocate interrupt instance */ + pmd->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!pmd->intr_handle) { + TAP_LOG(ERR, "Failed to allocate intr handle"); + goto error_exit; + } + /* Setup some default values */ data = dev->data; data->dev_private = pmd; @@ -1940,9 +1951,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name, dev->rx_pkt_burst = pmd_rx_burst; dev->tx_pkt_burst = pmd_tx_burst; - pmd->intr_handle.type = RTE_INTR_HANDLE_EXT; - pmd->intr_handle.fd = -1; - dev->intr_handle = &pmd->intr_handle; + rte_intr_handle_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT); + rte_intr_handle_fd_set(pmd->intr_handle, -1); + dev->intr_handle = pmd->intr_handle; /* Presetup the fds to -1 as being not valid */ for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) { @@ -2093,6 +2104,8 @@ eth_dev_tap_create(struct 
rte_vdev_device *vdev, const char *tap_name, /* mac_addrs must not be freed alone because part of dev_private */ dev->data->mac_addrs = NULL; rte_eth_dev_release_port(dev); + if (pmd->intr_handle) + rte_intr_handle_instance_free(pmd->intr_handle); error_exit_nodev: TAP_LOG(ERR, "%s Unable to initialize %s", diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h index a98ea11a33..996021e424 100644 --- a/drivers/net/tap/rte_eth_tap.h +++ b/drivers/net/tap/rte_eth_tap.h @@ -89,7 +89,7 @@ struct pmd_internals { LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows; struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */ struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */ - struct rte_intr_handle intr_handle; /* LSC interrupt handle. */ + struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */ int ka_fd; /* keep-alive file descriptor */ struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */ }; diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c index 1cacc15d9f..b1a339f8bd 100644 --- a/drivers/net/tap/tap_intr.c +++ b/drivers/net/tap/tap_intr.c @@ -29,12 +29,14 @@ static void tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev) { struct pmd_internals *pmd = dev->data->dev_private; - struct rte_intr_handle *intr_handle = &pmd->intr_handle; + struct rte_intr_handle *intr_handle = pmd->intr_handle; rte_intr_free_epoll_fd(intr_handle); - free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - intr_handle->nb_efd = 0; + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); + rte_intr_handle_nb_efd_set(intr_handle, 0); + + rte_intr_handle_instance_free(intr_handle); } /** @@ -52,15 +54,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev) struct pmd_internals *pmd = dev->data->dev_private; struct pmd_process_private *process_private = dev->process_private; unsigned int rxqs_n = pmd->dev->data->nb_rx_queues; - struct rte_intr_handle *intr_handle = 
&pmd->intr_handle; + struct rte_intr_handle *intr_handle = pmd->intr_handle; unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID); unsigned int i; unsigned int count = 0; if (!dev->data->dev_conf.intr_conf.rxq) return 0; - intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n); - if (intr_handle->intr_vec == NULL) { + + if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, rxqs_n)) { rte_errno = ENOMEM; TAP_LOG(ERR, "failed to allocate memory for interrupt vector," @@ -73,19 +75,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev) /* Skip queues that cannot request interrupts. */ if (!rxq || process_private->rxq_fds[i] == -1) { /* Use invalid intr_vec[] index to disable entry. */ - intr_handle->intr_vec[i] = - RTE_INTR_VEC_RXTX_OFFSET + - RTE_MAX_RXTX_INTR_VEC_ID; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)) + return -rte_errno; continue; } - intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count; - intr_handle->efds[count] = process_private->rxq_fds[i]; + if (rte_intr_handle_vec_list_index_set(intr_handle, i, + RTE_INTR_VEC_RXTX_OFFSET + count)) + return -rte_errno; + if (rte_intr_handle_efds_index_set(intr_handle, count, + process_private->rxq_fds[i])) + return -rte_errno; count++; } if (!count) tap_rx_intr_vec_uninstall(dev); else - intr_handle->nb_efd = count; + if (rte_intr_handle_nb_efd_set(intr_handle, count)) + return -rte_errno; return 0; } diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index fc1844ddfc..8dacae980c 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev) nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]); } + if (nic->intr_handle) + rte_intr_handle_instance_free(nic->intr_handle); + return 0; } @@ -2175,6 +2178,16 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev) goto fail; } + /* Allocate interrupt 
instance */ + nic->intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + true); + if (!nic->intr_handle) { + PMD_INIT_LOG(ERR, "Failed to allocate intr handle"); + ret = -ENODEV; + goto fail; + } + nicvf_disable_all_interrupts(nic); ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev); diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h index 0ca207d0dd..c7ea13313e 100644 --- a/drivers/net/thunderx/nicvf_struct.h +++ b/drivers/net/thunderx/nicvf_struct.h @@ -100,7 +100,7 @@ struct nicvf { uint16_t subsystem_vendor_id; struct nicvf_rbdr *rbdr; struct nicvf_rss_reta_info rss_info; - struct rte_intr_handle intr_handle; + struct rte_intr_handle *intr_handle; uint8_t cpi_alg; uint16_t mtu; int skip_bytes; diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index 0063994688..7095e7a4d2 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev); struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev); struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; const struct rte_memzone *mz; uint32_t ctrl_ext; uint16_t csum; @@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev) struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev); struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t intr_vector = 0; int err; bool link_up = false, negotiate = 0; @@ -1680,17 +1680,15 @@ txgbe_dev_start(struct rte_eth_dev *dev) return -1; } - if (rte_intr_dp_is_en(intr_handle) 
&& !intr_handle->intr_vec) { - intr_handle->intr_vec = - rte_zmalloc("intr_vec", - dev->data->nb_rx_queues * sizeof(int), 0); - if (intr_handle->intr_vec == NULL) { + if (rte_intr_dp_is_en(intr_handle) && + !rte_intr_handle_vec_list_base(intr_handle)) { + if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", + dev->data->nb_rx_queues)) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; } } - /* confiugre msix for sleep until rx interrupt */ txgbe_configure_msix(dev); @@ -1871,7 +1869,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev) struct txgbe_hw *hw = TXGBE_DEV_HW(dev); struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int vf; struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev); @@ -1921,10 +1919,8 @@ txgbe_dev_stop(struct rte_eth_dev *dev) /* Clean datapath event and queue/vec mapping */ rte_intr_efd_disable(intr_handle); - if (intr_handle->intr_vec != NULL) { - rte_free(intr_handle->intr_vec); - intr_handle->intr_vec = NULL; - } + if (rte_intr_handle_vec_list_base(intr_handle)) + rte_intr_handle_vec_list_free(intr_handle); /* reset hierarchy commit */ tm_conf->committed = false; @@ -1987,7 +1983,7 @@ txgbe_dev_close(struct rte_eth_dev *dev) { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; int retries = 0; int ret; @@ -3107,7 +3103,7 @@ txgbe_dev_interrupt_delayed_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct txgbe_interrupt 
*intr = TXGBE_DEV_INTR(dev); struct txgbe_hw *hw = TXGBE_DEV_HW(dev); uint32_t eicr; @@ -3640,7 +3636,7 @@ static int txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; uint32_t mask; struct txgbe_hw *hw = TXGBE_DEV_HW(dev); @@ -3722,7 +3718,7 @@ static void txgbe_configure_msix(struct rte_eth_dev *dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct txgbe_hw *hw = TXGBE_DEV_HW(dev); uint32_t queue_id, base = TXGBE_MISC_VEC_ID; uint32_t vec = TXGBE_MISC_VEC_ID; @@ -3756,8 +3752,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev) queue_id++) { /* by default, 1:1 mapping */ txgbe_set_ivar_map(hw, 0, queue_id, vec); - intr_handle->intr_vec[queue_id] = vec; - if (vec < base + intr_handle->nb_efd - 1) + rte_intr_handle_vec_list_index_set(intr_handle, + queue_id, vec); + if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) + - 1) vec++; } diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 18ed94bd27..24222daafd 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev) int err; uint32_t tc, tcs; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev); struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev); struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev); @@ -613,7 +613,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev) struct txgbe_hw *hw = TXGBE_DEV_HW(dev); uint32_t intr_vector = 0; struct 
rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	int err, mask = 0;
 
@@ -674,11 +674,10 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
-		intr_handle->intr_vec =
-			rte_zmalloc("intr_vec",
-				dev->data->nb_rx_queues * sizeof(int), 0);
-		if (intr_handle->intr_vec == NULL) {
+	if (rte_intr_dp_is_en(intr_handle) &&
+	    !rte_intr_handle_vec_list_base(intr_handle)) {
+		if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+						   dev->data->nb_rx_queues)) {
 			PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
 				" intr_vec", dev->data->nb_rx_queues);
 			return -ENOMEM;
@@ -717,7 +716,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	if (hw->adapter_stopped)
 		return 0;
@@ -744,10 +743,8 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
 	/* Clean datapath event and queue/vec mapping */
 	rte_intr_efd_disable(intr_handle);
-	if (intr_handle->intr_vec != NULL) {
-		rte_free(intr_handle->intr_vec);
-		intr_handle->intr_vec = NULL;
-	}
+	if (rte_intr_handle_vec_list_base(intr_handle))
+		rte_intr_handle_vec_list_free(intr_handle);
 
 	adapter->rss_reta_updated = 0;
 	hw->dev_start = false;
@@ -760,7 +757,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
@@ -921,7 +918,7 @@ static int
 txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -943,7 +940,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	uint32_t vec = TXGBE_MISC_VEC_ID;
 
 	if (rte_intr_allow_others(intr_handle))
@@ -983,7 +980,7 @@ static void
 txgbevf_configure_msix(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t q_idx;
 	uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1009,8 +1006,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
 		 * as TXGBE_VF_MAXMSIVECOTR = 1
 		 */
 		txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
-		intr_handle->intr_vec[q_idx] = vector_idx;
-		if (vector_idx < base + intr_handle->nb_efd - 1)
+		rte_intr_handle_vec_list_index_set(intr_handle, q_idx,
+						   vector_idx);
+		if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle)
+		    - 1)
 			vector_idx++;
 	}
 
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..a595352e63 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
 eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
 {
 	struct rte_intr_handle *handle = eth_dev->intr_handle;
-	struct rte_epoll_event rev;
+	struct rte_epoll_event rev, *elist;
 	int epfd, ret;
 
 	if (!handle)
 		return 0;
 
-	if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+	elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+	if (rte_intr_handle_efds_index_get(handle, rxq_idx) == elist->fd)
 		return 0;
 
 	VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
 			rxq_idx);
 
-	if (handle->elist[rxq_idx].fd != -1)
+	if (elist->fd != -1)
 		VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
-			handle->elist[rxq_idx].fd);
+			elist->fd);
 
 	/*
 	 * First remove invalid epoll event, and then install
 	 * the new one. May be solved with a proper API in the
 	 * future.
 	 */
-	epfd = handle->elist[rxq_idx].epfd;
-	rev = handle->elist[rxq_idx];
+	epfd = elist->epfd;
+	rev = *elist;
 	ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
-			&handle->elist[rxq_idx]);
+			elist);
 	if (ret) {
 		VHOST_LOG(ERR, "Delete epoll event failed.\n");
 		return ret;
 	}
 
-	rev.fd = handle->efds[rxq_idx];
-	handle->elist[rxq_idx] = rev;
-	ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
-			&handle->elist[rxq_idx]);
+	rev.fd = rte_intr_handle_efds_index_get(handle, rxq_idx);
+	if (rte_intr_handle_elist_index_set(handle, rxq_idx, rev))
+		return -rte_errno;
+
+	elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+	ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
 	if (ret) {
 		VHOST_LOG(ERR, "Add epoll event failed.\n");
 		return ret;
@@ -641,9 +644,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
 
 	if (intr_handle) {
-		if (intr_handle->intr_vec)
-			free(intr_handle->intr_vec);
-		free(intr_handle);
+		if (rte_intr_handle_vec_list_base(intr_handle))
+			rte_intr_handle_vec_list_free(intr_handle);
+
+		rte_intr_handle_instance_free(intr_handle);
 	}
 
 	dev->intr_handle = NULL;
@@ -662,29 +666,32 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
 	if (dev->intr_handle)
 		eth_vhost_uninstall_intr(dev);
 
-	dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+	dev->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
 	if (!dev->intr_handle) {
 		VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
 		return -ENOMEM;
 	}
-	memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
-	dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+	if (rte_intr_handle_efd_counter_size_set(dev->intr_handle,
+						 sizeof(uint64_t)))
+		return -rte_errno;
 
-	dev->intr_handle->intr_vec =
-		malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
-	if (!dev->intr_handle->intr_vec) {
+	if (rte_intr_handle_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
 		VHOST_LOG(ERR,
 			"Failed to allocate memory for interrupt vector\n");
-		free(dev->intr_handle);
+		rte_intr_handle_instance_free(dev->intr_handle);
 		return -ENOMEM;
 	}
 
+
 	VHOST_LOG(INFO, "Prepare intr vec\n");
 	for (i = 0; i < nb_rxq; i++) {
-		dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
-		dev->intr_handle->efds[i] = -1;
+		if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+						RTE_INTR_VEC_RXTX_OFFSET + i))
+			return -rte_errno;
+		if (rte_intr_handle_efds_index_set(dev->intr_handle, i, -1))
+			return -rte_errno;
 		vq = dev->data->rx_queues[i];
 		if (!vq) {
 			VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +710,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
 				"rxq-%d's kickfd is invalid, skip!\n", i);
 			continue;
 		}
-		dev->intr_handle->efds[i] = vring.kickfd;
+
+		if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+						   vring.kickfd))
+			continue;
 		VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
 	}
 
-	dev->intr_handle->nb_efd = nb_rxq;
-	dev->intr_handle->max_intr = nb_rxq + 1;
-	dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+	if (rte_intr_handle_nb_efd_set(dev->intr_handle, nb_rxq))
+		return -rte_errno;
+
+	if (rte_intr_handle_max_intr_set(dev->intr_handle, nb_rxq + 1))
+		return -rte_errno;
+
+	if (rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+		return -rte_errno;
 
 	return 0;
 }
@@ -914,7 +929,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
 			vring_id);
 		return ret;
 	}
-	eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+	if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, rx_idx,
+					   vring.kickfd))
+		return -rte_errno;
 
 	vq = eth_dev->data->rx_queues[rx_idx];
 	if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c9..4de1c929a9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -722,8 +722,8 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	if (intr_conf->lsc || intr_conf->rxq) {
 		virtio_intr_disable(dev);
 		rte_intr_efd_disable(dev->intr_handle);
-		rte_free(dev->intr_handle->intr_vec);
-		dev->intr_handle->intr_vec = NULL;
+		if (rte_intr_handle_vec_list_base(dev->intr_handle))
+			rte_intr_handle_vec_list_free(dev->intr_handle);
 	}
 
 	virtio_reset(hw);
@@ -1634,7 +1634,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(INFO, "queue/interrupt binding");
 	for (i = 0; i < dev->data->nb_rx_queues; ++i) {
-		dev->intr_handle->intr_vec[i] = i + 1;
+		if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+						       i + 1))
+			return -rte_errno;
 		if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
 				VIRTIO_MSI_NO_VECTOR) {
 			PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1673,11 +1675,10 @@ virtio_configure_intr(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (!dev->intr_handle->intr_vec) {
-		dev->intr_handle->intr_vec =
-			rte_zmalloc("intr_vec",
-				hw->max_queue_pairs * sizeof(int), 0);
-		if (!dev->intr_handle->intr_vec) {
+	if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+		if (rte_intr_handle_vec_list_alloc(dev->intr_handle,
+						   "intr_vec",
+						   hw->max_queue_pairs)) {
 			PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
 				     hw->max_queue_pairs);
 			return -ENOMEM;
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 16c58710d7..3d0ce9458c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,40 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
 
 	if (!eth_dev->intr_handle) {
-		eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+		eth_dev->intr_handle =
+			rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+						       false);
 		if (!eth_dev->intr_handle) {
-			PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
+			PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle",
+				    dev->path);
 			return -1;
 		}
-		memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
 	}
 
 	for (i = 0; i < dev->max_queue_pairs; ++i)
-		eth_dev->intr_handle->efds[i] = dev->callfds[i];
-	eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
-	eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
-	eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+		if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, i,
+						   dev->callfds[i]))
+			return -rte_errno;
+
+	if (rte_intr_handle_nb_efd_set(eth_dev->intr_handle,
+				       dev->max_queue_pairs))
+		return -rte_errno;
+
+	if (rte_intr_handle_max_intr_set(eth_dev->intr_handle,
+					 dev->max_queue_pairs + 1))
+		return -rte_errno;
+
+	if (rte_intr_handle_type_set(eth_dev->intr_handle,
+				     RTE_INTR_HANDLE_VDEV))
+		return -rte_errno;
+
 	/* For virtio vdev, no need to read counter for clean */
-	eth_dev->intr_handle->efd_counter_size = 0;
-	eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+	if (rte_intr_handle_efd_counter_size_set(eth_dev->intr_handle, 0))
+		return -rte_errno;
+
+	if (rte_intr_handle_fd_set(eth_dev->intr_handle,
+				   dev->ops->get_intr_fd(dev)))
+		return -rte_errno;
 
 	return 0;
 }
@@ -657,7 +675,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
 
 	if (eth_dev->intr_handle) {
-		free(eth_dev->intr_handle);
+		rte_intr_handle_instance_free(eth_dev->intr_handle);
 		eth_dev->intr_handle = NULL;
 	}
 
@@ -962,7 +980,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
 		return;
 	}
 	PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
-		    eth_dev->intr_handle->fd);
+		    rte_intr_handle_fd_get(eth_dev->intr_handle));
 	if (rte_intr_callback_unregister(eth_dev->intr_handle,
 					 virtio_interrupt_handler,
 					 eth_dev) != 1)
@@ -972,10 +990,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
 	if (dev->ops->server_disconnect)
 		dev->ops->server_disconnect(dev);
 
-	eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+	rte_intr_handle_fd_set(eth_dev->intr_handle,
+			       dev->ops->get_intr_fd(dev));
 
 	PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
-		    eth_dev->intr_handle->fd);
+		    rte_intr_handle_fd_get(eth_dev->intr_handle));
 
 	if (rte_intr_callback_register(eth_dev->intr_handle,
 				       virtio_interrupt_handler,
@@ -996,16 +1015,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
 
 	PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
-		    eth_dev->intr_handle->fd);
+		    rte_intr_handle_fd_get(eth_dev->intr_handle));
 
 	if (rte_intr_callback_unregister(eth_dev->intr_handle,
 					 virtio_interrupt_handler,
 					 eth_dev) != 1)
 		PMD_DRV_LOG(ERR, "interrupt unregister failed");
 
-	eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+	rte_intr_handle_fd_set(eth_dev->intr_handle,
+			       dev->ops->get_intr_fd(dev));
 
-	PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+	PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+		    rte_intr_handle_fd_get(eth_dev->intr_handle));
 
 	if (rte_intr_callback_register(eth_dev->intr_handle,
 				       virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a..1d0b61d9f2 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,10 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
-		intr_handle->intr_vec =
-			rte_zmalloc("intr_vec",
-				dev->data->nb_rx_queues * sizeof(int), 0);
-		if (intr_handle->intr_vec == NULL) {
+	if (rte_intr_dp_is_en(intr_handle) &&
+	    !rte_intr_handle_vec_list_base(intr_handle)) {
+		if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+						   dev->data->nb_rx_queues)) {
 			PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
 				     dev->data->nb_rx_queues);
 			rte_intr_efd_disable(intr_handle);
@@ -635,8 +634,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
 	if (!rte_intr_allow_others(intr_handle) &&
 	    dev->data->dev_conf.intr_conf.lsc != 0) {
 		PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
-		rte_free(intr_handle->intr_vec);
-		intr_handle->intr_vec = NULL;
+		rte_intr_handle_vec_list_free(intr_handle);
 		rte_intr_efd_disable(intr_handle);
 		return -1;
 	}
@@ -644,17 +642,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
 	/* if we cannot allocate one MSI-X vector per queue, don't enable
 	 * interrupt mode.
 	 */
-	if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+	if (hw->intr.num_intrs !=
+	    (rte_intr_handle_nb_efd_get(intr_handle) + 1)) {
 		PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
-			     hw->intr.num_intrs, intr_handle->nb_efd + 1);
-		rte_free(intr_handle->intr_vec);
-		intr_handle->intr_vec = NULL;
+			     hw->intr.num_intrs,
+			     rte_intr_handle_nb_efd_get(intr_handle) + 1);
+		rte_intr_handle_vec_list_free(intr_handle);
 		rte_intr_efd_disable(intr_handle);
 		return -1;
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		intr_handle->intr_vec[i] = i + 1;
+		if (rte_intr_handle_vec_list_index_set(intr_handle, i, i + 1))
+			return -rte_errno;
 
 	for (i = 0; i < hw->intr.num_intrs; i++)
 		hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +802,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 		if (hw->intr.lsc_only)
 			tqd->conf.intrIdx = 1;
 		else
-			tqd->conf.intrIdx = intr_handle->intr_vec[i];
+			tqd->conf.intrIdx =
+				rte_intr_handle_vec_list_index_get(intr_handle,
+								   i);
 		tqd->status.stopped = TRUE;
 		tqd->status.error = 0;
 		memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +827,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 		if (hw->intr.lsc_only)
 			rqd->conf.intrIdx = 1;
 		else
-			rqd->conf.intrIdx = intr_handle->intr_vec[i];
+			rqd->conf.intrIdx =
+				rte_intr_handle_vec_list_index_get(intr_handle,
+								   i);
 		rqd->status.stopped = TRUE;
 		rqd->status.error = 0;
 		memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1026,8 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 	/* Clean datapath event and queue/vector mapping */
 	rte_intr_efd_disable(intr_handle);
-	if (intr_handle->intr_vec != NULL) {
-		rte_free(intr_handle->intr_vec);
-		intr_handle->intr_vec = NULL;
-	}
+	if (rte_intr_handle_vec_list_base(intr_handle))
+		rte_intr_handle_vec_list_free(intr_handle);
 
 	/* quiesce the device first */
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1677,7 +1679,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
 	struct vmxnet3_hw *hw = dev->data->dev_private;
 
-	vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+	vmxnet3_enable_intr(hw,
+		rte_intr_handle_vec_list_index_get(dev->intr_handle,
+						   queue_id));
 
 	return 0;
 }
@@ -1687,7 +1691,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
 	struct vmxnet3_hw *hw = dev->data->dev_private;
 
-	vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+	vmxnet3_disable_intr(hw,
+		rte_intr_handle_vec_list_index_get(dev->intr_handle, queue_id));
 
 	return 0;
 }
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..4fbe25080e 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
 #define IFPGA_MAX_IRQ 12
 /* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle;
 
 static struct ifpga_rawdev *
 ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,23 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type, int vec_start,
 			  rte_intr_callback_fn handler, void *arg)
 {
 	struct rte_intr_handle *intr_handle;
+	int rc;
 
 	if (type == IFPGA_FME_IRQ)
-		intr_handle = &ifpga_irq_handle[0];
+		intr_handle =
+			rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
 	else if (type == IFPGA_AFU_IRQ)
-		intr_handle = &ifpga_irq_handle[vec_start + 1];
+		intr_handle = rte_intr_handle_instance_index_get(
+				ifpga_irq_handle, vec_start + 1);
 	else
 		return 0;
 
 	rte_intr_efd_disable(intr_handle);
 
-	return rte_intr_callback_unregister(intr_handle, handler, arg);
+	rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+	rte_intr_handle_instance_free(ifpga_irq_handle);
+	return rc;
 }
 
 int
@@ -1370,6 +1376,10 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
 	struct opae_manager *mgr;
 	struct opae_accelerator *acc;
 
+	ifpga_irq_handle = rte_intr_handle_instance_alloc(IFPGA_MAX_IRQ, false);
+	if (!ifpga_irq_handle)
+		return -ENOMEM;
+
 	adapter = ifpga_rawdev_get_priv(dev);
 	if (!adapter)
 		return -ENODEV;
@@ -1379,29 +1389,35 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
 		return -ENODEV;
 
 	if (type == IFPGA_FME_IRQ) {
-		intr_handle = &ifpga_irq_handle[0];
+		intr_handle =
+			rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
 		count = 1;
 	} else if (type == IFPGA_AFU_IRQ) {
-		intr_handle = &ifpga_irq_handle[vec_start + 1];
+		intr_handle = rte_intr_handle_instance_index_get(
+				ifpga_irq_handle, vec_start + 1);
 	} else {
 		return -EINVAL;
 	}
 
-	intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+	if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+		return -rte_errno;
 
 	ret = rte_intr_efd_enable(intr_handle, count);
 	if (ret)
 		return -ENODEV;
 
-	intr_handle->fd = intr_handle->efds[0];
+	if (rte_intr_handle_fd_set(intr_handle,
+			rte_intr_handle_efds_index_get(intr_handle, 0)))
+		return -rte_errno;
 
 	IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
-			name, intr_handle->vfio_dev_fd,
-			intr_handle->fd);
+			name, rte_intr_handle_dev_fd_get(intr_handle),
+			rte_intr_handle_fd_get(intr_handle));
 
 	if (type == IFPGA_FME_IRQ) {
 		struct fpga_fme_err_irq_set err_irq_set;
-		err_irq_set.evtfd = intr_handle->efds[0];
+		err_irq_set.evtfd = rte_intr_handle_efds_index_get(intr_handle,
+								   0);
 
 		ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
 		if (ret)
@@ -1412,7 +1428,7 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
 			return -EINVAL;
 
 		ret = opae_acc_set_irq(acc, vec_start, count,
-				       intr_handle->efds);
+				       rte_intr_handle_efds_base(intr_handle));
 		if (ret)
 			return -EINVAL;
 	}
@@ -1491,7 +1507,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
 	data->bus = pci_dev->addr.bus;
 	data->devid = pci_dev->addr.devid;
 	data->function = pci_dev->addr.function;
-	data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+	data->vfio_dev_fd = rte_intr_handle_dev_fd_get(pci_dev->intr_handle);
 
 	adapter = rawdev->dev_private;
 	/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..5497ef2906 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,11 @@ ntb_dev_close(struct rte_rawdev *dev)
 		ntb_queue_release(dev, i);
 	hw->queue_pairs = 0;
 
-	intr_handle = &hw->pci_dev->intr_handle;
+	intr_handle = hw->pci_dev->intr_handle;
 	/* Clean datapath event and vec mapping */
 	rte_intr_efd_disable(intr_handle);
-	if (intr_handle->intr_vec) {
-		rte_free(intr_handle->intr_vec);
-		intr_handle->intr_vec = NULL;
-	}
+	if (rte_intr_handle_vec_list_base(intr_handle))
+		rte_intr_handle_vec_list_free(intr_handle);
 
 	/* Disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
@@ -1402,7 +1400,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
 	/* Init doorbell. */
 	hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
 
-	intr_handle = &pci_dev->intr_handle;
+	intr_handle = pci_dev->intr_handle;
 	/* Register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
 			   uintptr_t base)
 {
 	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
-	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *handle = pci_dev->intr_handle;
 
 	/* Disable error interrupts */
 	otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
 			 uintptr_t base)
 {
 	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
-	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct rte_intr_handle *handle = pci_dev->intr_handle;
 	int ret;
 
 	/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 1dc813d0a3..90b9a73f6a 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
 	if (rte_pci_map_device(dev))
 		goto err;
 
-	internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+	internal->vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
 
 	for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
 	     i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *)&irq_set->data;
-	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+		rte_intr_handle_fd_get(internal->pdev->intr_handle);
 
 	for (i = 0; i < nr_vring; i++)
 		internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..27dc50cc57 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,13 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
 		DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
 		goto error;
 	}
+	priv->err_intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       true);
+	if (!priv->err_intr_handle) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		goto error;
+	}
 	priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
 	if (priv->vdev == NULL) {
 		DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +723,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
 	if (priv) {
 		if (priv->var)
 			mlx5_glue->dv_free_var(priv->var);
+		if (priv->err_intr_handle)
+			rte_intr_handle_instance_free(priv->err_intr_handle);
 		rte_free(priv);
 	}
 	if (ctx)
@@ -750,6 +759,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
 		rte_vdpa_unregister_device(priv->vdev);
 	mlx5_glue->close_device(priv->ctx);
 	pthread_mutex_destroy(&priv->vq_config_lock);
+	if (priv->err_intr_handle)
+		rte_intr_handle_instance_free(priv->err_intr_handle);
 	rte_free(priv);
 	}
 	return 0;
 }
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..f72cb358ec 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,7 +92,7 @@ struct mlx5_vdpa_virtq {
 		void *buf;
 		uint32_t size;
 	} umems[3];
-	struct rte_intr_handle intr_handle;
+	struct rte_intr_handle *intr_handle;
 	uint64_t err_time[3]; /* RDTSC time of recent errors. */
 	uint32_t n_retry;
 	struct mlx5_devx_virtio_q_couners_attr reset;
@@ -142,7 +142,7 @@ struct mlx5_vdpa_priv {
 	struct mlx5dv_devx_event_channel *eventc;
 	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5dv_devx_uar *uar;
-	struct rte_intr_handle err_intr_handle;
+	struct rte_intr_handle *err_intr_handle;
 	struct mlx5_devx_obj *td;
 	struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
 	uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..1f3da2461a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
 		DRV_LOG(ERR, "Failed to change device event channel FD.");
 		goto error;
 	}
-	priv->err_intr_handle.fd = priv->err_chnl->fd;
-	priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
-	if (rte_intr_callback_register(&priv->err_intr_handle,
+
+	if (rte_intr_handle_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+		goto error;
+
+	if (rte_intr_handle_type_set(priv->err_intr_handle,
+				     RTE_INTR_HANDLE_EXT))
+		goto error;
+
+	if (rte_intr_callback_register(priv->err_intr_handle,
 				       mlx5_vdpa_err_interrupt_handler,
 				       priv)) {
-		priv->err_intr_handle.fd = 0;
+		rte_intr_handle_fd_set(priv->err_intr_handle, 0);
 		DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
 			priv->vid);
 		goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
 	int retries = MLX5_VDPA_INTR_RETRIES;
 	int ret = -EAGAIN;
 
-	if (!priv->err_intr_handle.fd)
+	if (!rte_intr_handle_fd_get(priv->err_intr_handle))
 		return;
 	while (retries-- && ret == -EAGAIN) {
-		ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+		ret = rte_intr_callback_unregister(priv->err_intr_handle,
 					    mlx5_vdpa_err_interrupt_handler,
 					    priv);
 		if (ret == -EAGAIN) {
 			DRV_LOG(DEBUG, "Try again to unregister fd %d "
 				"of error interrupt, retries = %d.",
-				priv->err_intr_handle.fd, retries);
+				rte_intr_handle_fd_get(priv->err_intr_handle),
+				retries);
 			rte_pause();
 		}
 	}
-	memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
 	if (priv->err_chnl) {
 #ifdef HAVE_IBV_DEVX_EVENT
 		union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..b9d03953ac 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
 	int nbytes;
 
 	do {
-		nbytes = read(virtq->intr_handle.fd, &buf, 8);
+		nbytes = read(rte_intr_handle_fd_get(virtq->intr_handle), &buf,
+			      8);
 		if (nbytes < 0) {
 			if (errno == EINTR ||
 			    errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 	int retries = MLX5_VDPA_INTR_RETRIES;
 	int ret = -EAGAIN;
 
-	if (virtq->intr_handle.fd != -1) {
+	if (rte_intr_handle_fd_get(virtq->intr_handle) != -1) {
 		while (retries-- && ret == -EAGAIN) {
-			ret = rte_intr_callback_unregister(&virtq->intr_handle,
+			ret = rte_intr_callback_unregister(virtq->intr_handle,
 							mlx5_vdpa_virtq_handler,
 							virtq);
 			if (ret == -EAGAIN) {
 				DRV_LOG(DEBUG, "Try again to unregister fd %d "
-					"of virtq %d interrupt, retries = %d.",
-					virtq->intr_handle.fd,
-					(int)virtq->index, retries);
+					"of virtq %d interrupt, retries = %d.",
+					rte_intr_handle_fd_get(virtq->intr_handle),
+					(int)virtq->index, retries);
+
 				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
 			}
 		}
-		virtq->intr_handle.fd = -1;
+		rte_intr_handle_fd_set(virtq->intr_handle, -1);
 	}
+	if (virtq->intr_handle)
+		rte_intr_handle_instance_free(virtq->intr_handle);
 	if (virtq->virtq) {
 		ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
 		if (ret)
@@ -336,21 +340,34 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	virtq->priv = priv;
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	/* Setup doorbell mapping. */
-	virtq->intr_handle.fd = vq.kickfd;
-	if (virtq->intr_handle.fd == -1) {
+	virtq->intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       true);
+	if (!virtq->intr_handle) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		goto error;
+	}
+
+	if (rte_intr_handle_fd_set(virtq->intr_handle, vq.kickfd))
+		goto error;
+
+	if (rte_intr_handle_fd_get(virtq->intr_handle) == -1) {
 		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
 	} else {
-		virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
-		if (rte_intr_callback_register(&virtq->intr_handle,
+		if (rte_intr_handle_type_set(virtq->intr_handle,
+					     RTE_INTR_HANDLE_EXT))
+			goto error;
+		if (rte_intr_callback_register(virtq->intr_handle,
 					       mlx5_vdpa_virtq_handler,
 					       virtq)) {
-			virtq->intr_handle.fd = -1;
+			rte_intr_handle_fd_set(virtq->intr_handle, -1);
 			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 				index);
 			goto error;
 		} else {
 			DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
-				virtq->intr_handle.fd, index);
+				rte_intr_handle_fd_get(virtq->intr_handle),
+				index);
 		}
 	}
 	/* Subscribe virtq error event. */
@@ -501,7 +518,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
 	if (ret)
 		return -1;
-	if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+	if (vq.size != virtq->vq_size || vq.kickfd !=
+	    rte_intr_handle_fd_get(virtq->intr_handle))
 		return 1;
 	if (virtq->eqp.cq.cq_obj.cq) {
 		if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index fc37236195..fdc9aeb894 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1093,7 +1093,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
 
 	intr_handle = dev->intr_handle;
-	if (!intr_handle || !intr_handle->intr_vec) {
+	if (!intr_handle || !rte_intr_handle_vec_list_base(intr_handle)) {
 		rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
 		return -ENOTSUP;
 	}
@@ -1104,7 +1104,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 		return -ENOTSUP;
 	}
 
-	vec = intr_handle->intr_vec[queue_id];
+	vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
 	ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (ret && (ret != -EEXIST)) {
 		rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..b4a0dd533f 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
 struct alarm_entry {
 	LIST_ENTRY(alarm_entry) next;
-	struct rte_intr_handle handle;
+	struct rte_intr_handle *handle;
 	struct timespec time;
 	rte_eal_alarm_callback cb_fn;
 	void *cb_arg;
@@ -43,22 +43,45 @@ struct alarm_entry {
 static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
 static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
 
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
 static void eal_alarm_callback(void *arg);
 
 int
 rte_eal_alarm_init(void)
 {
-	intr_handle.type = RTE_INTR_HANDLE_ALARM;
+	int fd;
+
+	intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (!intr_handle) {
+		RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+		goto error;
+	}
+
+	if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+		goto error;
+
+	if (rte_intr_handle_fd_set(intr_handle, -1))
+		goto error;
 
 	/* on FreeBSD, timers don't use fd's, and their identifiers are stored
 	 * in separate namespace from fd's, so using any value is OK. however,
 	 * EAL interrupts handler expects fd's to be unique, so use an actual fd
 	 * to guarantee unique timer identifier.
 	 */
-	intr_handle.fd = open("/dev/zero", O_RDONLY);
+	fd = open("/dev/zero", O_RDONLY);
+
+	if (rte_intr_handle_fd_set(intr_handle, fd))
+		goto error;
 
 	return 0;
+error:
+	if (intr_handle)
+		rte_intr_handle_instance_free(intr_handle);
+
+	rte_intr_handle_fd_set(intr_handle, -1);
+	return -1;
 }
 
 static inline int
@@ -118,7 +141,7 @@ unregister_current_callback(void)
 		ap = LIST_FIRST(&alarm_list);
 		do {
-			ret = rte_intr_callback_unregister(&intr_handle,
+			ret = rte_intr_callback_unregister(intr_handle,
 				eal_alarm_callback, &ap->time);
 		} while (ret == -EAGAIN);
 	}
@@ -136,7 +159,7 @@ register_first_callback(void)
 		ap = LIST_FIRST(&alarm_list);
 
 		/* register a new callback */
-		ret = rte_intr_callback_register(&intr_handle,
+		ret = rte_intr_callback_register(intr_handle,
 				eal_alarm_callback, &ap->time);
 	}
 	return ret;
@@ -164,6 +187,8 @@ eal_alarm_callback(void *arg __rte_unused)
 
 		rte_spinlock_lock(&alarm_list_lk);
 		LIST_REMOVE(ap, next);
+		if (ap->handle)
+			rte_intr_handle_instance_free(ap->handle);
 		free(ap);
 
 		ap = LIST_FIRST(&alarm_list);
@@ -202,6 +227,12 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
 	new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
 
+	new_alarm->handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (new_alarm->handle == NULL)
+		return -ENOMEM;
+
 	rte_spinlock_lock(&alarm_list_lk);
 
 	if (LIST_EMPTY(&alarm_list))
@@ -256,6 +287,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 			if (ap->executing == 0) {
 				LIST_REMOVE(ap, next);
 				free(ap);
+				if (ap->handle)
+					rte_intr_handle_instance_free(
+							ap->handle);
 				count++;
 			} else {
 				/* If calling from other context, mark that
@@ -282,6 +316,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 			    cb_arg == ap->cb_arg)) {
 				if (ap->executing == 0) {
 					LIST_REMOVE(ap, next);
+					if (ap->handle)
+						rte_intr_handle_instance_free(
+								ap->handle);
 					free(ap);
 					count++;
 					ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
 	RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
 		rte_intr_callback_fn cb, void *cb_arg, int rc),
 	rte_trace_point_emit_int(rc);
-	rte_trace_point_emit_int(handle->vfio_dev_fd);
-	rte_trace_point_emit_int(handle->fd);
-	rte_trace_point_emit_int(handle->type);
-	rte_trace_point_emit_u32(handle->max_intr);
-	rte_trace_point_emit_u32(handle->nb_efd);
+	rte_trace_point_emit_ptr(handle);
 	rte_trace_point_emit_ptr(cb);
 	rte_trace_point_emit_ptr(cb_arg);
 )
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
 	RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
 		rte_intr_callback_fn cb, void *cb_arg, int rc),
 	rte_trace_point_emit_int(rc);
-	rte_trace_point_emit_int(handle->vfio_dev_fd);
-	rte_trace_point_emit_int(handle->fd);
-	rte_trace_point_emit_int(handle->type);
-	rte_trace_point_emit_u32(handle->max_intr);
-	rte_trace_point_emit_u32(handle->nb_efd);
+	rte_trace_point_emit_ptr(handle);
 	rte_trace_point_emit_ptr(cb);
 	rte_trace_point_emit_ptr(cb_arg);
 )
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
 	rte_eal_trace_intr_enable,
 	RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
 	rte_trace_point_emit_int(rc);
-	rte_trace_point_emit_int(handle->vfio_dev_fd);
-	rte_trace_point_emit_int(handle->fd);
-	rte_trace_point_emit_int(handle->type);
-	rte_trace_point_emit_u32(handle->max_intr);
-	rte_trace_point_emit_u32(handle->nb_efd);
+	rte_trace_point_emit_ptr(handle);
 )
 
 RTE_TRACE_POINT(
 	rte_eal_trace_intr_disable,
 	RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
 	rte_trace_point_emit_int(rc);
-	rte_trace_point_emit_int(handle->vfio_dev_fd);
-	rte_trace_point_emit_int(handle->fd);
-	rte_trace_point_emit_int(handle->type);
-	rte_trace_point_emit_u32(handle->max_intr);
-	rte_trace_point_emit_u32(handle->nb_efd);
+	rte_trace_point_emit_ptr(handle);
)
 
 /* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..e959fba27b 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,37 @@ struct alarm_entry {
 static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
 static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
 
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
 static int handler_registered = 0;
 static void eal_alarm_callback(void *arg);
 
 int
 rte_eal_alarm_init(void)
 {
-	intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+	intr_handle =
+		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+					       false);
+	if (!intr_handle) {
+		RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+		goto error;
+	}
+
+	rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
 	/* create a timerfd file descriptor */
-	intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
-	if (intr_handle.fd == -1)
+	if (rte_intr_handle_fd_set(intr_handle,
+			timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
 		goto error;
 
+	if (rte_intr_handle_fd_get(intr_handle) == -1)
+		goto error;
 
 	return 0;
 
 error:
+	if (intr_handle)
+		rte_intr_handle_instance_free(intr_handle);
+
 	rte_errno = errno;
 	return -1;
 }
@@ -109,7 +124,8 @@ eal_alarm_callback(void *arg __rte_unused)
 		atime.it_value.tv_sec -= now.tv_sec;
 		atime.it_value.tv_nsec -= now.tv_nsec;
 
-		timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+		timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0, &atime,
+				NULL);
 	}
 	rte_spinlock_unlock(&alarm_list_lk);
 }
@@ -140,7 +156,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	rte_spinlock_lock(&alarm_list_lk);
 	if (!handler_registered) {
 		/* registration can fail, callback can be registered later */
-		if (rte_intr_callback_register(&intr_handle,
+		if (rte_intr_callback_register(intr_handle,
 				eal_alarm_callback, NULL) == 0)
 			handler_registered = 1;
 	}
@@ -170,7 +186,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 			.tv_nsec = (us % US_PER_S) * NS_PER_US,
 		},
 	};
-	ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+	ret |= timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0,
+			       &alarm_time, NULL);
 	}
 	rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..14d693cd88 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
 #include "eal_private.h"
 
-static struct rte_intr_handle intr_handle = {
-	.type = RTE_INTR_HANDLE_DEV_EVENT,
-	.fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
 static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
 static uint32_t monitor_refcount;
 static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
 dev_uev_socket_fd_create(void)
 {
 	struct sockaddr_nl addr;
-	int ret;
+	int ret, fd;
 
-	intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
-			SOCK_NONBLOCK,
-			NETLINK_KOBJECT_UEVENT);
-	if (intr_handle.fd < 0) {
+	fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+		    NETLINK_KOBJECT_UEVENT);
+	if (fd < 0) {
 		RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
 		return -1;
 	}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
 	addr.nl_pid = 0;
 	addr.nl_groups = 0xffffffff;
 
-	ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+	ret =
bind(fd, (struct sockaddr *) &addr, sizeof(addr)); if (ret < 0) { RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n"); goto err; } + if (rte_intr_handle_fd_set(intr_handle, fd)) { + ret = -1; + goto err; + } + return 0; err: - close(intr_handle.fd); - intr_handle.fd = -1; + close(fd); return ret; } @@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length) static void dev_delayed_unregister(void *param) { - rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param); - close(intr_handle.fd); - intr_handle.fd = -1; + rte_intr_callback_unregister(intr_handle, dev_uev_handler, param); + close(rte_intr_handle_fd_get(intr_handle)); + rte_intr_handle_fd_set(intr_handle, -1); } static void @@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param) memset(&uevent, 0, sizeof(struct rte_dev_event)); memset(buf, 0, EAL_UEV_MSG_LEN); - ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT); + ret = recv(rte_intr_handle_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN, + MSG_DONTWAIT); if (ret < 0 && errno == EAGAIN) return; else if (ret <= 0) { @@ -311,24 +311,40 @@ rte_dev_event_monitor_start(void) goto exit; } + intr_handle = + rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, + false); + if (!intr_handle) { + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + ret = -ENOMEM; + goto exit; + } + + if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT)) { + ret = -1; + goto exit; + } + + if (rte_intr_handle_fd_set(intr_handle, -1)) { + ret = -1; + goto exit; + } + ret = dev_uev_socket_fd_create(); if (ret) { RTE_LOG(ERR, EAL, "error create device event fd.\n"); goto exit; } - ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL); + ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL); if (ret) { RTE_LOG(ERR, EAL, "fail to register uevent callback.\n"); - close(intr_handle.fd); - intr_handle.fd = -1; + close(rte_intr_handle_fd_get(intr_handle)); goto exit; } monitor_refcount++; exit: + if (ret != 0 && intr_handle) { +
rte_intr_handle_fd_set(intr_handle, -1); + rte_intr_handle_instance_free(intr_handle); + } rte_rwlock_write_unlock(&monitor_lock); return ret; } @@ -350,15 +366,18 @@ rte_dev_event_monitor_stop(void) goto exit; } - ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler, + ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler, (void *)-1); if (ret < 0) { RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n"); goto exit; } - close(intr_handle.fd); - intr_handle.fd = -1; + close(rte_intr_handle_fd_get(intr_handle)); + rte_intr_handle_fd_set(intr_handle, -1); + + if (intr_handle) + rte_intr_handle_instance_free(intr_handle); monitor_refcount--; diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index 8edca82ce8..eff072ac16 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, return; } - eth_dev->intr_handle = &pci_dev->intr_handle; + eth_dev->intr_handle = pci_dev->intr_handle; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { eth_dev->data->dev_flags = 0; diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index daf5ca9242..1f1a0291b6 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -4777,13 +4777,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) } intr_handle = dev->intr_handle; - if (!intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(intr_handle)) { RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n"); return -EPERM; } for (qid = 0; qid < dev->data->nb_rx_queues; qid++) { - vec = intr_handle->intr_vec[qid]; + vec = rte_intr_handle_vec_list_index_get(intr_handle, qid); rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (rc && rc != -EEXIST) { RTE_ETHDEV_LOG(ERR, @@ -4818,15 +4818,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id) } intr_handle = dev->intr_handle; - if (!intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(intr_handle)) { 
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n"); return -1; } - vec = intr_handle->intr_vec[queue_id]; + vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ? (vec - RTE_INTR_VEC_RXTX_OFFSET) : vec; - fd = intr_handle->efds[efd_idx]; + fd = rte_intr_handle_efds_index_get(intr_handle, efd_idx); return fd; } @@ -5004,12 +5004,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, } intr_handle = dev->intr_handle; - if (!intr_handle->intr_vec) { + if (!rte_intr_handle_vec_list_base(intr_handle)) { RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n"); return -EPERM; } - vec = intr_handle->intr_vec[queue_id]; + vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id); rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (rc && rc != -EEXIST) { RTE_ETHDEV_LOG(ERR, From patchwork Fri Sep 3 12:41:01 2021 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 97935 X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra To: , Anatoly Burakov , Harman Kalra Date: Fri, 3 Sep 2021 18:11:01 +0530 Message-ID: <20210903124102.47425-7-hkalra@marvell.com> In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com> References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com> Subject: [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque List-Id: DPDK patches and discussions
Move the interrupt handle structure definition inside the C file, making its fields completely opaque to the outside world. Dynamically allocate the efds and elist arrays of the intr_handle structure, based on a size provided by the user, e.g. the number of MSI-X interrupts supported by a PCI device. Signed-off-by: Harman Kalra --- drivers/bus/pci/linux/pci_vfio.c | 7 + lib/eal/common/eal_common_interrupts.c | 172 ++++++++++++++++++++++++- lib/eal/include/meson.build | 1 - lib/eal/include/rte_eal_interrupts.h | 72 ----------- lib/eal/include/rte_interrupts.h | 24 +++- 5 files changed, 196 insertions(+), 80 deletions(-) delete mode 100644 lib/eal/include/rte_eal_interrupts.h diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c index f920163580..6af8279189 100644 --- a/drivers/bus/pci/linux/pci_vfio.c +++ b/drivers/bus/pci/linux/pci_vfio.c @@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd) return -1; } + /* Reallocate the efds and elist fields of intr_handle based + * on PCI device MSIX size.
+ */ + if (rte_intr_handle_event_list_update(dev->intr_handle, + irq.count)) + return -1; + /* if this vector cannot be used with eventfd, fail if we explicitly * specified interrupt type, otherwise continue */ if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) { diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c index 2e4fed96f0..caddf9b0ad 100644 --- a/lib/eal/common/eal_common_interrupts.c +++ b/lib/eal/common/eal_common_interrupts.c @@ -11,6 +11,29 @@ #include +struct rte_intr_handle { + RTE_STD_C11 + union { + struct { + /** VFIO/UIO cfg device file descriptor */ + int dev_fd; + int fd; /**< interrupt event file descriptor */ + }; + void *handle; /**< device driver handle (Windows) */ + }; + bool alloc_from_hugepage; + enum rte_intr_handle_type type; /**< handle type */ + uint32_t max_intr; /**< max interrupt requested */ + uint32_t nb_efd; /**< number of available efd(event fd) */ + uint8_t efd_counter_size; /**< size of efd counter, used for vdev */ + uint16_t nb_intr; + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */ + int *efds; /**< intr vectors/efds mapping */ + struct rte_epoll_event *elist; /**< intr vector epoll event */ + uint16_t vec_list_size; + int *intr_vec; /**< intr vector number array */ +}; + struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, bool from_hugepage) @@ -31,11 +54,40 @@ struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, } for (i = 0; i < size; i++) { + if (from_hugepage) + intr_handle[i].efds = rte_zmalloc(NULL, + RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t), 0); + else + intr_handle[i].efds = calloc(1, + RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t)); + if (!intr_handle[i].efds) { + RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n"); + rte_errno = ENOMEM; + goto fail; + } + + if (from_hugepage) + intr_handle[i].elist = rte_zmalloc(NULL, + RTE_MAX_RXTX_INTR_VEC_ID * + sizeof(struct rte_epoll_event), 0); + else + intr_handle[i].elist = calloc(1, + 
RTE_MAX_RXTX_INTR_VEC_ID * + sizeof(struct rte_epoll_event)); + if (!intr_handle[i].elist) { + RTE_LOG(ERR, EAL, "Fail to allocate event list\n"); + rte_errno = ENOMEM; + goto fail; + } intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID; intr_handle[i].alloc_from_hugepage = from_hugepage; } return intr_handle; +fail: + /* Release arrays of the current and all previous instances with the + * allocator that created them. + */ + do { + if (from_hugepage) { + rte_free(intr_handle[i].efds); + rte_free(intr_handle[i].elist); + } else { + free(intr_handle[i].efds); + free(intr_handle[i].elist); + } + } while (i-- > 0); + if (from_hugepage) + rte_free(intr_handle); + else + free(intr_handle); + return NULL; } struct rte_intr_handle *rte_intr_handle_instance_index_get( @@ -73,12 +125,48 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle, } intr_handle[index].fd = src->fd; - intr_handle[index].vfio_dev_fd = src->vfio_dev_fd; + intr_handle[index].dev_fd = src->dev_fd; + intr_handle[index].type = src->type; intr_handle[index].max_intr = src->max_intr; intr_handle[index].nb_efd = src->nb_efd; intr_handle[index].efd_counter_size = src->efd_counter_size; + if (intr_handle[index].nb_intr != src->nb_intr) { + if (src->alloc_from_hugepage) + intr_handle[index].efds = + rte_realloc(intr_handle[index].efds, + src->nb_intr * + sizeof(uint32_t), 0); + else + intr_handle[index].efds = + realloc(intr_handle[index].efds, + src->nb_intr * sizeof(uint32_t)); + if (intr_handle[index].efds == NULL) { + RTE_LOG(ERR, EAL, "Failed to realloc the efds list"); + rte_errno = ENOMEM; + goto fail; + } + + if (src->alloc_from_hugepage) + intr_handle[index].elist = + rte_realloc(intr_handle[index].elist, + src->nb_intr * + sizeof(struct rte_epoll_event), 0); + else + intr_handle[index].elist = + realloc(intr_handle[index].elist, + src->nb_intr * + sizeof(struct rte_epoll_event)); + if (intr_handle[index].elist == NULL) { + RTE_LOG(ERR, EAL, "Failed to realloc the event list"); + rte_errno = ENOMEM; + goto fail; + } + + intr_handle[index].nb_intr = src->nb_intr; + } + memcpy(intr_handle[index].efds, src->efds, src->nb_intr * sizeof(int)); memcpy(intr_handle[index].elist, src->elist, src->nb_intr * sizeof(struct rte_epoll_event)); @@ -87,6 +175,45 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
} +int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle, + int size) +{ + struct rte_epoll_event *tmp_elist; + int *tmp_efds; + + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (size == 0) { + RTE_LOG(ERR, EAL, "Size can't be zero\n"); + rte_errno = EINVAL; + goto fail; + } + + /* Grow through a temporary pointer so the original array is not + * leaked if realloc fails, and use the allocator that matches the + * instance. + */ + if (intr_handle->alloc_from_hugepage) + tmp_efds = rte_realloc(intr_handle->efds, + size * sizeof(int), 0); + else + tmp_efds = realloc(intr_handle->efds, size * sizeof(int)); + if (tmp_efds == NULL) { + RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n"); + rte_errno = ENOMEM; + goto fail; + } + intr_handle->efds = tmp_efds; + + if (intr_handle->alloc_from_hugepage) + tmp_elist = rte_realloc(intr_handle->elist, + size * sizeof(struct rte_epoll_event), 0); + else + tmp_elist = realloc(intr_handle->elist, + size * sizeof(struct rte_epoll_event)); + if (tmp_elist == NULL) { + RTE_LOG(ERR, EAL, "Failed to realloc the event list\n"); + rte_errno = ENOMEM; + goto fail; + } + intr_handle->elist = tmp_elist; + + intr_handle->nb_intr = size; + + return 0; +fail: + return rte_errno; +} + + void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle) { if (intr_handle == NULL) { @@ -94,10 +221,15 @@ void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle) rte_errno = ENOTSUP; } - if (intr_handle->alloc_from_hugepage) + if (intr_handle->alloc_from_hugepage) { + rte_free(intr_handle->efds); + rte_free(intr_handle->elist); rte_free(intr_handle); - else + } else { + free(intr_handle->efds); + free(intr_handle->elist); free(intr_handle); + } } int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd) @@ -164,7 +296,7 @@ int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd) goto fail; } - intr_handle->vfio_dev_fd = fd; + intr_handle->dev_fd = fd; return 0; fail: @@ -179,7 +311,7 @@ int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle) goto fail; } - return intr_handle->vfio_dev_fd; + return intr_handle->dev_fd; fail: return rte_errno; } @@ -300,6 +432,12 @@ int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle) goto fail; } + if (!intr_handle->efds) { + RTE_LOG(ERR, EAL, "Event fd list not allocated\n"); + rte_errno = ENOTSUP; + goto fail;
+ } + return intr_handle->efds; fail: return NULL; @@ -314,6 +452,12 @@ int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle, goto fail; } + if (!intr_handle->efds) { + RTE_LOG(ERR, EAL, "Event fd list not allocated\n"); + rte_errno = EFAULT; + goto fail; + } + if (index >= intr_handle->nb_intr) { RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, intr_handle->nb_intr); @@ -335,6 +479,12 @@ int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle, goto fail; } + if (!intr_handle->efds) { + RTE_LOG(ERR, EAL, "Event fd list not allocated\n"); + rte_errno = EFAULT; + goto fail; + } + if (index >= intr_handle->nb_intr) { RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, intr_handle->nb_intr); @@ -358,6 +508,12 @@ struct rte_epoll_event *rte_intr_handle_elist_index_get( goto fail; } + if (!intr_handle->elist) { + RTE_LOG(ERR, EAL, "Event list not allocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + if (index >= intr_handle->nb_intr) { RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, intr_handle->nb_intr); @@ -379,6 +535,12 @@ int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle, goto fail; } + if (!intr_handle->elist) { + RTE_LOG(ERR, EAL, "Event list not allocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + if (index >= intr_handle->nb_intr) { RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, intr_handle->nb_intr); diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index 8e258607b8..86468d1a2b 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -49,7 +49,6 @@ headers += files( 'rte_version.h', 'rte_vfio.h', ) -indirect_headers += files('rte_eal_interrupts.h') # special case install the generic headers, since they go in a subdir generic_headers = files( diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h deleted file mode 100644 index 216aece61b..0000000000 --- 
a/lib/eal/include/rte_eal_interrupts.h +++ /dev/null @@ -1,72 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2014 Intel Corporation - */ - -#ifndef _RTE_INTERRUPTS_H_ -#error "don't include this file directly, please include generic " -#endif - -/** - * @file rte_eal_interrupts.h - * @internal - * - * Contains function prototypes exposed by the EAL for interrupt handling by - * drivers and other DPDK internal consumers. - */ - -#ifndef _RTE_EAL_INTERRUPTS_H_ -#define _RTE_EAL_INTERRUPTS_H_ - -#define RTE_MAX_RXTX_INTR_VEC_ID 512 -#define RTE_INTR_VEC_ZERO_OFFSET 0 -#define RTE_INTR_VEC_RXTX_OFFSET 1 - -/** - * The interrupt source type, e.g. UIO, VFIO, ALARM etc. - */ -enum rte_intr_handle_type { - RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */ - RTE_INTR_HANDLE_UIO, /**< uio device handle */ - RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */ - RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */ - RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */ - RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */ - RTE_INTR_HANDLE_ALARM, /**< alarm handle */ - RTE_INTR_HANDLE_EXT, /**< external handler */ - RTE_INTR_HANDLE_VDEV, /**< virtual device */ - RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */ - RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */ - RTE_INTR_HANDLE_MAX /**< count of elements */ -}; - -/** Handle for interrupts. 
*/ -struct rte_intr_handle { - RTE_STD_C11 - union { - struct { - RTE_STD_C11 - union { - /** VFIO device file descriptor */ - int vfio_dev_fd; - /** UIO cfg file desc for uio_pci_generic */ - int uio_cfg_fd; - }; - int fd; /**< interrupt event file descriptor */ - }; - void *handle; /**< device driver handle (Windows) */ - }; - bool alloc_from_hugepage; - enum rte_intr_handle_type type; /**< handle type */ - uint32_t max_intr; /**< max interrupt requested */ - uint32_t nb_efd; /**< number of available efd(event fd) */ - uint8_t efd_counter_size; /**< size of efd counter, used for vdev */ - uint16_t nb_intr; - /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */ - int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */ - struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID]; - /**< intr vector epoll event */ - uint16_t vec_list_size; - int *intr_vec; /**< intr vector number array */ -}; - -#endif /* _RTE_EAL_INTERRUPTS_H_ */ diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h index afc3262967..7dfb849eea 100644 --- a/lib/eal/include/rte_interrupts.h +++ b/lib/eal/include/rte_interrupts.h @@ -25,9 +25,29 @@ extern "C" { /** Interrupt handle */ struct rte_intr_handle; -#define RTE_INTR_HANDLE_DEFAULT_SIZE 1 +#define RTE_MAX_RXTX_INTR_VEC_ID 512 +#define RTE_INTR_VEC_ZERO_OFFSET 0 +#define RTE_INTR_VEC_RXTX_OFFSET 1 + +/** + * The interrupt source type, e.g. UIO, VFIO, ALARM etc. 
+ */ +enum rte_intr_handle_type { + RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */ + RTE_INTR_HANDLE_UIO, /**< uio device handle */ + RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */ + RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */ + RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */ + RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */ + RTE_INTR_HANDLE_ALARM, /**< alarm handle */ + RTE_INTR_HANDLE_EXT, /**< external handler */ + RTE_INTR_HANDLE_VDEV, /**< virtual device */ + RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */ + RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */ + RTE_INTR_HANDLE_MAX /**< count of elements */ +}; -#include "rte_eal_interrupts.h" +#define RTE_INTR_HANDLE_DEFAULT_SIZE 1 /** Function to be registered for the specific interrupt */ typedef void (*rte_intr_callback_fn)(void *cb_arg); From patchwork Fri Sep 3 12:41:02 2021 X-Patchwork-Submitter: Harman Kalra X-Patchwork-Id: 97936 X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra To: , Bruce Richardson CC: Harman Kalra Date: Fri, 3 Sep 2021 18:11:02 +0530 Message-ID: <20210903124102.47425-8-hkalra@marvell.com> In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com> References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com> Subject: [dpdk-dev] [PATCH v1 7/7] eal/alarm: introduce alarm fini routine List-Id: DPDK patches
and discussions Implement an alarm cleanup routine so that the memory allocated for the interrupt instance can be freed. Signed-off-by: Harman Kalra --- lib/eal/common/eal_private.h | 11 +++++++++++ lib/eal/freebsd/eal.c | 1 + lib/eal/freebsd/eal_alarm.c | 7 +++++++ lib/eal/linux/eal.c | 1 + lib/eal/linux/eal_alarm.c | 10 +++++++++- 5 files changed, 29 insertions(+), 1 deletion(-) diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h index 64cf4e81c8..ed429dec9d 100644 --- a/lib/eal/common/eal_private.h +++ b/lib/eal/common/eal_private.h @@ -162,6 +162,17 @@ int rte_eal_intr_init(void); */ int rte_eal_alarm_init(void); +/** + * Free the resources allocated by the alarm mechanism at init time. + * Called as part of EAL cleanup. + * + * This function is private to EAL. + * + * @return + * None + */ +void rte_eal_alarm_fini(void); + /** * Function is to check if the kernel module(like, vfio, vfio_iommu_type1, * etc.) loaded.
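Since the handle is now heap-allocated in rte_eal_alarm_init(), every init path needs the matching release that rte_eal_alarm_fini() provides. The ownership pattern the patch follows can be sketched self-contained in plain C; the names below (alarm_init, alarm_fini, alarm_handle, struct intr_handle) are illustrative stand-ins, not the actual DPDK symbols, and the NULL reset after free is a defensive addition rather than something the patch itself does:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-in for the opaque interrupt handle. */
struct intr_handle {
	int fd;
	int type;
};

/* Mirrors the file-scope `static struct rte_intr_handle *intr_handle`. */
struct intr_handle *alarm_handle;

/* Init: allocate the instance once, as rte_eal_alarm_init() now does. */
int alarm_init(void)
{
	alarm_handle = calloc(1, sizeof(*alarm_handle));
	if (alarm_handle == NULL)
		return -1;
	alarm_handle->fd = -1; /* no timer fd yet */
	return 0;
}

/* Fini: release the instance on EAL cleanup, as rte_eal_alarm_fini() does. */
void alarm_fini(void)
{
	free(alarm_handle);
	alarm_handle = NULL; /* makes a second cleanup call a harmless no-op */
}
```

Resetting the pointer in the fini routine is what lets rte_eal_cleanup() style teardown be called defensively more than once without a double free.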
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c index 6cee5ae369..7efead4f48 100644 --- a/lib/eal/freebsd/eal.c +++ b/lib/eal/freebsd/eal.c @@ -973,6 +973,7 @@ rte_eal_cleanup(void) rte_eal_memory_detach(); rte_trace_save(); eal_trace_fini(); + rte_eal_alarm_fini(); eal_cleanup_config(internal_conf); return 0; } diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c index b4a0dd533f..13c81518ed 100644 --- a/lib/eal/freebsd/eal_alarm.c +++ b/lib/eal/freebsd/eal_alarm.c @@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER; static struct rte_intr_handle *intr_handle; static void eal_alarm_callback(void *arg); +void +rte_eal_alarm_fini(void) +{ + if (intr_handle) + rte_intr_handle_instance_free(intr_handle); +} + int rte_eal_alarm_init(void) { diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c index 3577eaeaa4..5c8af85ad5 100644 --- a/lib/eal/linux/eal.c +++ b/lib/eal/linux/eal.c @@ -1370,6 +1370,7 @@ rte_eal_cleanup(void) rte_eal_memory_detach(); rte_trace_save(); eal_trace_fini(); + rte_eal_alarm_fini(); eal_cleanup_config(internal_conf); return 0; } diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c index e959fba27b..5dd804f83c 100644 --- a/lib/eal/linux/eal_alarm.c +++ b/lib/eal/linux/eal_alarm.c @@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle; static int handler_registered = 0; static void eal_alarm_callback(void *arg); +void +rte_eal_alarm_fini(void) +{ + if (intr_handle) + rte_intr_handle_instance_free(intr_handle); +} + int rte_eal_alarm_init(void) { @@ -70,7 +77,8 @@ rte_eal_alarm_init(void) goto error; } - rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM); + if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM)) + goto error; /* create a timerfd file descriptor */ if (rte_intr_handle_fd_set(intr_handle,
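Taken together, the series replaces direct field access such as intr_handle.fd with accessor calls on an opaque type. The shape of that API can be sketched self-contained; the names here (handle_alloc, handle_fd_set, handle_fd_get, handle_free) are simplified stand-ins, and the error convention is approximated with negative errno return values, whereas the real rte_intr_handle_* accessors also record the failure in rte_errno:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Definition lives in one .c file only; callers see a forward declaration,
 * so the layout can change without breaking the ABI. */
struct handle {
	int fd;
	int type;
};

struct handle *handle_alloc(void)
{
	struct handle *h = calloc(1, sizeof(*h));

	if (h != NULL)
		h->fd = -1; /* same "no fd yet" sentinel the patches use */
	return h;
}

/* Setters validate the instance first, like rte_intr_handle_fd_set(). */
int handle_fd_set(struct handle *h, int fd)
{
	if (h == NULL)
		return -ENOTSUP; /* instance not allocated */
	h->fd = fd;
	return 0;
}

int handle_fd_get(const struct handle *h)
{
	if (h == NULL)
		return -ENOTSUP;
	return h->fd;
}

void handle_free(struct handle *h)
{
	free(h);
}
```

The benefit the series is after is exactly this indirection: once every consumer goes through the accessors, fields such as efds and elist can become dynamically sized (patch 6/7) without touching any driver.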