TASK-SPECIFIC EXPLAINABILITY OF GRAPH NEURAL NETWORKS TO IMPROVE MODEL PERFORMANCE USING FUNCTIONAL AND STRUCTURAL SIMILARITY OF NEURONS
2024
Metrics Details
- Usage: 254
- Downloads: 140
- Abstract Views: 114
Thesis / Dissertation Description
Graph Neural Networks (GNNs) have emerged as powerful tools for modeling and analyzing graph-structured data, gaining prominence in applications such as social network analysis, recommendation systems, and bioinformatics. Despite their success, the opaque nature of GNNs has raised concerns regarding their interpretability and trustworthiness. This thesis addresses the challenge of explainability in GNNs by proposing novel methodologies that enhance the understanding and performance of these models through task-specific explainability mechanisms.

The research introduces techniques that leverage both functional and structural similarity of neurons within GNNs to provide comprehensive explanations. The functional similarity is assessed by analyzing gradients, activations, and covariance of neuron responses. Methods such as Neuron Conductance and Integrated Gradients are employed to identify critical neurons and their contributions to the model's performance. Structural similarity is measured by examining the impact of individual neurons on the predictions of specific subgraphs within the GNN. Subgraphs are induced by nodes whose predictions change significantly when a neuron is deactivated. These subgraphs are then used to train a contrastive learning model that generates meaningful embeddings, facilitating the clustering of neurons based on their structural roles within the network.

The proposed approach shifts the focus from local explanations, which clarify individual predictions, to global explanations that uncover the overarching principles governing GNN behavior. This holistic perspective allows for the fine-tuning of GNN components to enhance performance on specific tasks. Experimental evaluations demonstrate the effectiveness of the proposed methods, showing significant improvements in model interpretability and task-specific performance.
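The description mentions Integrated Gradients as one of the attribution methods used to score neuron contributions. The thesis itself provides no code here; the following is only a minimal NumPy sketch of the general Integrated Gradients idea (path integral of gradients from a baseline, approximated by a Riemann sum) applied to a hypothetical scalar response function — the function `f` and all names are illustrative, not taken from the thesis.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=200):
    """Approximate Integrated Gradients of a scalar function f at x.

    IG_i = (x_i - baseline_i) * average over the straight-line path of
    dF/dx_i, estimated with a midpoint Riemann sum and central differences.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    eps = 1e-6
    grads = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps                     # midpoint rule
        point = baseline + alpha * (x - baseline)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            # central-difference gradient at this path point
            grads[i] += (f(point + d) - f(point - d)) / (2 * eps)
    grads /= steps
    return (x - baseline) * grads

# Toy "neuron response" standing in for a GNN output (hypothetical):
f = lambda v: v[0] ** 2 + 2.0 * v[1]
attr = integrated_gradients(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
print(attr)  # -> approximately [1. 2.]
```

A useful sanity check, and the property that makes the method attractive for ranking critical neurons, is completeness: the attributions sum to `f(x) - f(baseline)`.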
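The structural-similarity step described above — deactivating a neuron and collecting the nodes whose predictions change, which then induce a subgraph — can be sketched on a toy one-layer message-passing model. Everything below (the graph, weights, and `predict` helper) is a hypothetical stand-in for the thesis's actual GNNs, intended only to illustrate the deactivate-and-compare procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: a 6-node cycle with self-loops (hypothetical example data)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
A = np.eye(6)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

X = rng.normal(size=(6, 4))    # node features
W1 = rng.normal(size=(4, 8))   # hidden-layer weights (8 neurons)
W2 = rng.normal(size=(8, 3))   # output-layer weights (3 classes)

def predict(dead_unit=None):
    """Per-node class predictions; optionally silence one hidden neuron."""
    H = np.maximum(A @ X @ W1, 0.0)      # neighborhood aggregation + ReLU
    if dead_unit is not None:
        H[:, dead_unit] = 0.0            # deactivate the chosen neuron
    return (A @ H @ W2).argmax(axis=1)

base = predict()
# For each hidden neuron, the nodes whose prediction flips when it is
# silenced; these node sets induce the subgraphs compared downstream.
flipped = {u: np.flatnonzero(predict(dead_unit=u) != base) for u in range(8)}
print({u: v.tolist() for u, v in flipped.items()})
```

In the thesis the induced subgraphs then feed a contrastive learning model whose embeddings cluster neurons by structural role; this sketch stops at the subgraph-induction step.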