Interpretable logical-probabilistic approximation of neural networks
Cognitive Systems Research, ISSN: 1389-0417, Vol: 88, Page: 101301
2024
Metrics Details
- Captures: 6
- Readers: 6
Article Description
The paper proposes approximating DNNs by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the original neurons to incoming signals and discover all logical-probabilistic causal relationships between the input and output. These logical-probabilistic causal relationships are, in a certain sense, the most precise: previous works proved that, theoretically (when the probabilities are known), they can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the original neurons, after the neurons' signals are replaced with true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN, which also allows tracing the DNN by following its excitations through the causal relationships. This approximation of the DNN is a distillation method of the Model Translation type, which trains an alternative, smaller, interpretable model that mimics the total input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exposes the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with statistical estimates of their significance. Experimental results on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN are presented, as well as the accuracy of the obtained logical-probabilistic neural network. From a practical point of view, the interpretable transformation of neural networks is very important for hybrid artificial intelligence systems, in which neural networks are integrated with symbolic AI methods. As a practical application, we consider a smart city scenario.
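The construction described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): each hidden and output neuron of a small feedforward network is replaced by a logical-probabilistic neuron that learns, from the original neuron's responses, the probability of firing given the binarized (true/false) signals of its inputs. The names `LPNeuron` and `binarize`, the per-neuron median thresholds, and the use of conditional-probability lookup tables are assumptions for illustration only; the paper's procedure for discovering logical-probabilistic causal relationships is more elaborate.

```python
# Hypothetical sketch: approximate each neuron of a trained 2-layer MLP by a
# "logical-probabilistic neuron" that predicts the neuron's binarized output
# from the binarized outputs of its inputs. Conditional-probability tables are
# used here as a stand-in for the paper's causal-relationship discovery.
import numpy as np

rng = np.random.default_rng(0)

def binarize(a, thresholds):
    """Replace real-valued signals with true/false using per-neuron thresholds (assumed: medians)."""
    return (a > thresholds).astype(int)

class LPNeuron:
    """Learns P(output is true | binary input pattern) from the original neuron's responses."""
    def fit(self, X_bin, y_bin):
        self.table = {}
        for x, y in zip(map(tuple, X_bin), y_bin):
            cnt = self.table.setdefault(x, [0, 0])   # [count_false, count_true]
            cnt[y] += 1
        self.prior = y_bin.mean()                     # fallback for unseen input patterns
        return self

    def predict(self, X_bin):
        def p_true(x):
            cnt = self.table.get(tuple(x))
            return self.prior if cnt is None else cnt[1] / sum(cnt)
        return np.array([p_true(x) >= 0.5 for x in X_bin], dtype=int)

# Toy original network: 4 inputs -> 3 hidden ReLU neurons -> 1 output neuron.
X = rng.normal(size=(500, 4))
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3,))
H = np.maximum(X @ W1, 0)                             # hidden activations of the original DNN
Y = (H @ W2 > 0).astype(int)                          # original output, already boolean

# Replace real-valued signals between neurons with true/false values.
Xb = binarize(X, np.median(X, axis=0))
Hb = binarize(H, np.median(H, axis=0))

# One logical-probabilistic neuron per hidden neuron, then one for the output,
# connected by the same structure as the original network.
lp_hidden = [LPNeuron().fit(Xb, Hb[:, j]) for j in range(H.shape[1])]
Hb_hat = np.column_stack([n.predict(Xb) for n in lp_hidden])
lp_out = LPNeuron().fit(Hb_hat, Y)

print("output approximation accuracy:", (lp_out.predict(Hb_hat) == Y).mean())
```

In this sketch, a prediction can be traced by following which binary input pattern each logical-probabilistic neuron observed and which table entry (with its estimated probability) fired, which is the local-interpretability idea the abstract describes.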
Bibliographic Details
Elsevier BV