004 Data processing; Computer science
Document Type
- Article (310)
- Doctoral Thesis (153)
- Working Paper (122)
- Conference Proceeding (62)
- Preprint (57)
- Bachelor Thesis (56)
- Diploma Thesis (54)
- Part of a Book (47)
- Contribution to a Periodical (46)
- Diplomthesis (25)
Is part of the Bibliography
- no (1001)
Keywords
- Lambda calculus (21)
- Inclusion (13)
- Formal semantics (11)
- Machine Learning (11)
- Accessibility (10)
- Digitalization (10)
- artificial intelligence (10)
- data science (10)
- machine learning (10)
- Operational semantics (9)
Institute
- Computer Science (496)
- Computer Science and Mathematics (129)
- Executive Board (81)
- Medicine (68)
- Frankfurt Institute for Advanced Studies (FIAS) (60)
- Economics and Business (54)
- Physics (44)
- University Computing Center (24)
- studiumdigitale (24)
- Mathematics (15)
In dynamical systems with distinct time scales, the time evolution in phase space may be influenced strongly by the fixed points of the fast subsystem. Orbits then typically follow these points, additionally performing rapid transitions between distinct branches on the time scale of the fast variables. As the branches guide the dynamics of the system along the manifold of former fixed points, they are considered transiently attracting states, and the intermittent transitions between branches correspond to state switching within transient-state dynamics. A full characterization of the set of former fixed points, the critical manifold, tends to be difficult in high-dimensional dynamical systems such as large neural networks. Here we point out that an easily computable subset of the critical manifold, the set of target points, can be used as a reference for the investigation of high-dimensional slow-fast systems. The set of target points corresponds in this context to the adiabatic projection of a given orbit onto the critical manifold. Applying our framework to a simple recurrent neural network, we find that the scaling of the Euclidean distance between the trajectory and its target points with the control parameter of the slow time scale allows us to distinguish an adiabatic regime from a state that is effectively independent of the target points.
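As a rough illustration of the target-point construction, the sketch below integrates a hypothetical two-unit rate network with slow adaptation; the equations, parameters, and the mutual-inhibition coupling are assumptions made for the example, not the network studied in the abstract. For each sampled point of the orbit the slow variables are frozen, the fast subsystem is relaxed to its fixed point (the target point), and the mean Euclidean distance between orbit and target points is reported for several values of the slow time-scale parameter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy slow-fast system (illustrative assumption, not the model from the abstract):
# fast rates x_i relax quickly, slow adaptation b_i evolves on a time scale ~ 1/eps.
W = np.array([[0.0, -6.0],
              [-6.0, 0.0]])   # mutual inhibition between two rate units
I_ext = np.array([3.0, 3.0])  # constant external drive

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def rhs(t, y, eps):
    x, b = y[:2], y[2:]
    dx = -x + sigmoid(W @ x + I_ext - b)   # fast subsystem
    db = eps * (x - b)                     # slow adaptation variables
    return np.concatenate([dx, db])

def target_point(x0, b, relax_time=200.0):
    """Adiabatic projection: freeze the slow variables b and let the
    fast subsystem relax towards its attracting fixed point."""
    sol = solve_ivp(lambda t, x: -x + sigmoid(W @ x + I_ext - b),
                    (0.0, relax_time), x0, rtol=1e-7, atol=1e-9)
    return sol.y[:, -1]

def mean_distance_to_targets(eps, t_max=1000.0, n_samples=100):
    """Average Euclidean distance between the orbit and its target points."""
    y0 = np.array([0.6, 0.4, 0.1, 0.1])
    ts = np.linspace(t_max / 2, t_max, n_samples)          # discard transient
    sol = solve_ivp(rhs, (0.0, t_max), y0, args=(eps,), t_eval=ts,
                    rtol=1e-7, atol=1e-9)
    dists = []
    for k in range(n_samples):
        x, b = sol.y[:2, k], sol.y[2:, k]
        dists.append(np.linalg.norm(x - target_point(x, b)))
    return np.mean(dists)

# How the mean distance scales with the slow time-scale parameter eps:
for eps in (0.1, 0.03, 0.01, 0.003):
    print(f"eps = {eps:6.3f}   <|x - x_target|> = {mean_distance_to_targets(eps):.4f}")
```

In this toy setting the distance is expected to shrink as eps decreases (the adiabatic regime), while larger eps keeps the orbit far from its target points; the abstract's analysis concerns how this distance scales with the slow control parameter in a recurrent neural network.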
Graph Neural Networks (GNNs) are a popular class of machine learning models. Inspired by the learning to explain (L2X) paradigm, we propose L2XGNN, a framework for explainable GNNs which provides faithful explanations by design. L2XGNN learns a mechanism for selecting explanatory subgraphs (motifs) which are exclusively used in the GNN's message-passing operations. L2XGNN is able to select, for each input graph, a subgraph with specific properties such as being sparse and connected. Imposing such constraints on the motifs often leads to more interpretable and effective explanations. Experiments on several datasets suggest that L2XGNN achieves the same classification accuracy as baseline methods that use the entire input graph, while ensuring that only the provided explanations are used to make predictions. Moreover, we show that L2XGNN is able to identify motifs responsible for the graph properties it is intended to predict.
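The general idea of restricting message passing to a learned explanatory subgraph can be pictured with the short PyTorch sketch below. It is not the authors' implementation: the edge-scoring network, the simple 0.5 threshold with a straight-through gradient, and the sum aggregation are assumptions for illustration, and the paper's actual subgraph-selection procedure (which enforces sparsity and connectedness of the motifs) is not reproduced here.

```python
import torch
import torch.nn as nn

class EdgeMaskedMPLayer(nn.Module):
    """Illustrative sketch: a learned, discretised edge mask restricts
    message passing to an explanatory subgraph, so only the selected
    edges contribute to the prediction."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.msg = nn.Linear(in_dim, out_dim)
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] (source, target)
        src, dst = edge_index
        # score each edge from the features of its endpoints
        score = torch.sigmoid(
            self.edge_scorer(torch.cat([x[src], x[dst]], dim=-1))).squeeze(-1)
        # hard 0/1 mask with a straight-through gradient estimator
        # (an assumption for this sketch, not the paper's sampling scheme)
        hard = (score > 0.5).float()
        mask = hard + score - score.detach()
        # message passing restricted to the masked (explanatory) edges
        messages = self.msg(x[src]) * mask.unsqueeze(-1)
        out = torch.zeros(x.size(0), messages.size(-1), device=x.device)
        out.index_add_(0, dst, messages)        # sum aggregation per target node
        return torch.relu(out), hard            # hard mask = returned explanation

# Toy usage: 4 nodes, a small directed edge list
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
layer = EdgeMaskedMPLayer(8, 16)
h, explanation_mask = layer(x, edge_index)
print(h.shape, explanation_mask)
```

Because the unselected edges are masked out before aggregation, the explanation returned by such a layer is faithful by construction: the model's output depends only on the edges it reports as explanatory.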