RADHawk—an AI-based knowledge recommender to support precision education, improve reporting productivity, and reduce cognitive load
Pediatric Radiology, ISSN: 1432-1998, Vol: 55, Issue: 2, Page: 259-267
2024
Metrics Details
- Captures: 1
- Readers: 1
Article Description
Background: Using artificial intelligence (AI) to augment knowledge is key to establishing precision education in modern radiology training. Our department has developed a novel AI-derived knowledge recommender, RADHawk (RH), the first reported precision education program in radiology, which augments the training of radiology residents and fellows by pushing personalized, relevant educational content in real time and in context with the case being interpreted.

Purpose: To assess the impact on trainees of an AI-based knowledge recommender, compared with traditional knowledge sourcing, on radiology reporting time, report quality, cognitive load, and learning experience.

Materials and methods: A mixed-methods prospective study allocated trainees to intervention and control groups, working with and without access to RH, respectively. Validated questionnaires and observed, graded simulated picture archiving and communication system (PACS)-based reporting at the start and end of a one-month rotation assessed technology acceptance, case report quality, case report and sourcing time, cognitive load, and attitudes toward modified learning strategies. Non-parametric regression analyses and Mann–Whitney tests compared outcomes between groups, with significance set at P<0.05.

Results: At the end of the rotation, the intervention group (n=28) showed a statistically significant reduction in case report time of 162 s per case (95% CI -275.76 to -52.40 s; P=0.002) and a 14% increase in accuracy scores (95% CI 8.1-19.8%; P<0.001) compared with the control group (n=29). The intervention group also reported lower mental demand (P=0.030) and less effort (P=0.030) and frustration (P=0.030) while reporting. More than 78% of the intervention group rated RH positively on effectiveness, increase in productivity, job usefulness, and ease of use, and 89% of intervention participants requested access to RH for their next rotation.

Conclusion: As the first reported AI-derived knowledge recommender for radiology education, RH significantly reduced reporting time and improved reporting accuracy while lowering overall workload and mental demand for radiology trainees. High acceptance among trainees suggests its potential for supporting self-directed learning. Further testing in a larger external cohort will support more widespread implementation of RH for precision education.
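The abstract names a Mann–Whitney test for the between-group comparison of reporting outcomes. The sketch below illustrates that kind of comparison on hypothetical per-case report times; the sample sizes (n=28 intervention, n=29 control) and the P<0.05 threshold come from the abstract, but the data values, distributions, and variable names are placeholders, not the study's measurements or analysis code.

```python
# Minimal sketch of a Mann-Whitney comparison of case report times between
# groups, under assumed placeholder data (not the study's measurements).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical per-case report times in seconds at the end of the rotation.
control_times = rng.normal(loc=900, scale=150, size=29)       # n=29, without RADHawk
intervention_times = rng.normal(loc=740, scale=150, size=28)  # n=28, with RADHawk

# Two-sided Mann-Whitney U test; significance threshold P < 0.05 as in the study.
stat, p_value = mannwhitneyu(intervention_times, control_times, alternative="two-sided")

# Simple distribution-free effect summary; the paper itself reports a
# regression-based estimate of -162 s per case.
median_diff = np.median(intervention_times) - np.median(control_times)

print(f"Mann-Whitney U = {stat:.1f}, P = {p_value:.3f}")
print(f"Median difference (intervention - control): {median_diff:.0f} s")
```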
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85211779736&origin=inward
- http://dx.doi.org/10.1007/s00247-024-06116-y
- http://www.ncbi.nlm.nih.gov/pubmed/39644355
- https://link.springer.com/10.1007/s00247-024-06116-y
- https://dx.doi.org/10.1007/s00247-024-06116-y
- https://link.springer.com/article/10.1007/s00247-024-06116-y
Springer Science and Business Media LLC