PlumX Metrics

Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos

Ophthalmology Science, ISSN: 2666-9145, Vol: 5, Issue: 2, Page: 100659
2025
  • Citations: 0
  • Usage: 0
  • Captures: 1
  • Mentions: 0
  • Social Media: 0

Article Description

Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs. Retrospective study. We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted. This retrospective study developed regression-based models, including ResNet34 with DIR and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power. Mean absolute error (MAE) and coefficient of determination were used to evaluate the models' performances. The Wilcoxon signed-rank test was performed to assess statistical significance between the DIR-integrated models and their baseline versions. For prediction of spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baselines, ResNet34 (MAE: 0.88D; P < 0.001) and SwinV2 (MAE: 0.87D; P < 0.001), in the internal test set. For prediction of SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) likewise significantly outperformed their baselines, ResNet34 (MAE: 0.81D; P < 0.001) and SwinV2 (MAE: 0.78D; P < 0.05), in the internal test set. Similar trends were observed in the external test sets for both spherical and SE power prediction. DIR-integrated DL models showed potential in addressing data imbalances and improving the prediction of refractive error. These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use. Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
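
A core ingredient of deep imbalanced regression as described above is reweighting the regression loss according to a smoothed label distribution. The sketch below is a minimal Python illustration of Label Distribution Smoothing for a dioptre-valued target; the bin width, kernel width, inverse-density weighting, and all function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of Label Distribution Smoothing (LDS) for imbalanced
# regression. Bin width, Gaussian sigma, and the weighting scheme are
# illustrative assumptions, not the configuration used in the article.
import numpy as np
from scipy.ndimage import gaussian_filter1d
import torch
import torch.nn as nn


def lds_weights(labels, bin_width=0.25, sigma=2.0):
    """Per-sample weights from a Gaussian-smoothed label histogram.

    labels: 1-D array of refractive errors in dioptres (e.g. SE power).
    Returns weights proportional to the inverse of the smoothed density.
    """
    labels = np.asarray(labels, dtype=np.float64)
    edges = np.arange(labels.min(), labels.max() + bin_width, bin_width)
    hist, _ = np.histogram(labels, bins=edges)
    # Smooth the empirical label distribution with a Gaussian kernel (LDS).
    smoothed = gaussian_filter1d(hist.astype(np.float64), sigma=sigma)
    smoothed = np.clip(smoothed, 1e-6, None)
    bin_idx = np.clip(np.digitize(labels, edges) - 1, 0, len(hist) - 1)
    weights = 1.0 / smoothed[bin_idx]        # inverse effective density
    return weights * len(weights) / weights.sum()  # mean weight of 1


class WeightedL1Loss(nn.Module):
    """L1 (MAE-style) regression loss with per-sample LDS weights."""

    def forward(self, pred, target, weight):
        return (weight * (pred - target).abs()).mean()


if __name__ == "__main__":
    # Toy spherical-equivalent labels in dioptres; real training would use
    # the labels of the development set and a CNN/Transformer backbone.
    rng = np.random.default_rng(0)
    train_labels = rng.normal(-1.0, 2.0, size=1000)
    weights = torch.tensor(lds_weights(train_labels), dtype=torch.float32)

    preds = torch.randn(1000)
    targets = torch.tensor(train_labels, dtype=torch.float32)
    loss = WeightedL1Loss()(preds, targets, weights)
    print(f"weighted MAE loss: {loss.item():.3f}")
```

Per-image absolute errors of a DIR-integrated model and its baseline could then be compared with a paired Wilcoxon signed-rank test (for example, scipy.stats.wilcoxon), in line with the significance testing described in the abstract.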

Bibliographic Details

Yew, Samantha Min Er; Lei, Xiaofeng; Chen, Yibing; Goh, Jocelyn Hui Lin; Pushpanathan, Krithi; Xue, Can Can; Wang, Ya Xing; Jonas, Jost B; Sabanayagam, Charumathi; Koh, Victor Teck Chang; Xu, Xinxing; Liu, Yong; Cheng, Ching-Yu; Tham, Yih-Chung

Elsevier BV

Medicine
