Evaluating the Clinical Utility of Artificial Intelligence Assistance and its Explanation on Glioma Grading Task
medRxiv
2022
- Mentions: 1
Metric Options: Counts | 1 Year | 3 Year. Selecting the 1-year or 3-year option changes the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews published within the selected time period in the same journal. The 1-year option compares the metrics against other articles/reviews published in the same calendar year; the 3-year option compares against other articles/reviews published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
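For intuition, here is a minimal sketch of the percentile comparison described above; the function name, cohort, and citation counts are illustrative assumptions, not PlumX or Scopus data or code.

```python
def citation_percentile(article_citations, cohort_citations):
    """Share of the benchmark cohort the article outperforms, in percent."""
    beaten = sum(c < article_citations for c in cohort_citations)
    return 100 * beaten / len(cohort_citations)

# 1-year option: benchmark against articles from the same journal and calendar year.
cohort_2019 = [3, 7, 1, 12, 5, 9, 0, 4, 6, 2]   # hypothetical citation counts
print(citation_percentile(11, cohort_2019))      # -> 90.0
```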
Metrics Details
- Mentions: 1
- News Mentions: 1
- News: 1
Most Recent News
Evaluating the Clinical Utility of Artificial Intelligence Assistance and its Explanation on Glioma Grading Task
2022 DEC 22 (NewsRx) -- By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- According to news reporting based on
Article Description
Background: As a fast-advancing technology, artificial intelligence (AI) has considerable potential to assist physicians in various clinical tasks, from disease identification to lesion segmentation. Despite much research, AI has not yet been applied to neuro-oncological imaging in a clinically meaningful way. To bridge the clinical implementation gap of AI in neuro-oncological settings, we conducted a clinical user-based evaluation, analogous to a phase II clinical trial, to evaluate the utility of AI for diagnostic predictions and the value of AI explanations on the glioma grading task.

Method: Using the publicly available BraTS dataset, we trained an AI model with 88.0% accuracy on the glioma grading task. We selected the SmoothGrad explainable AI algorithm based on a computational evaluation of explanation truthfulness among 16 commonly used candidate algorithms. SmoothGrad explains the AI model's prediction with a heatmap overlaid on the MRI that highlights the regions important for the prediction. The evaluation was an online survey in which the AI prediction and explanation were embedded. Each of the 35 neurosurgeon participants read 25 brain MRI scans of patients with gliomas and gave their judgment on the glioma grade without and with the assistance of the AI's prediction and explanation.

Result: Compared to an average accuracy of 82.5 ± 8.7% when physicians performed the task alone, physicians' task performance increased to 87.7 ± 7.3% with statistical significance (p-value = 0.002) when assisted by the AI prediction, and remained at almost the same level of 88.5 ± 7.0% (p-value = 0.35) with the additional AI explanation assistance.

Conclusion: The evaluation shows the clinical utility of AI in assisting physicians on the glioma grading task. It also reveals the limitations of applying existing AI explanation techniques in clinical settings.

Weina Jin and Mostafa Fatehi are co-first authors.
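The abstract describes SmoothGrad heatmaps overlaid on the MRI to highlight regions important for the AI prediction. Below is a minimal PyTorch sketch of how SmoothGrad saliency is typically computed; the model, input tensor, class index, noise level, and sample count are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def smoothgrad_saliency(model, mri_tensor, target_class,
                        n_samples=25, noise_level=0.15):
    """Average input gradients over noisy copies of the MRI (SmoothGrad)."""
    model.eval()
    # Gaussian noise scaled to the input's dynamic range.
    sigma = noise_level * (mri_tensor.max() - mri_tensor.min())
    grad_sum = torch.zeros_like(mri_tensor)

    for _ in range(n_samples):
        noisy = (mri_tensor + sigma * torch.randn_like(mri_tensor)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]  # add a batch dimension
        score.backward()
        grad_sum += noisy.grad.detach()

    # The averaged gradient magnitude serves as the saliency heatmap,
    # which can then be overlaid on the original MRI slice.
    return (grad_sum / n_samples).abs()
```

Averaging gradients over noisy copies of the input suppresses the pixel-level noise of raw gradient maps, which is what makes the resulting heatmaps readable as region-level highlights.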
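The Result paragraph reports significance values for the change in physician accuracy with AI assistance. The abstract does not name the statistical test used, so the sketch below shows one plausible paired comparison, a Wilcoxon signed-rank test over per-physician accuracies, using made-up numbers rather than the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Per-physician accuracy over the read cases (illustrative values only,
# not data from the study).
acc_alone   = np.array([0.80, 0.84, 0.76, 0.88, 0.92])
acc_with_ai = np.array([0.88, 0.88, 0.84, 0.92, 0.96])

stat, p_value = wilcoxon(acc_with_ai, acc_alone)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_value:.4f}")
```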
Bibliographic Details
Cold Spring Harbor Laboratory