JustViT

About
Interfaces
This algorithm implements all of the following input-output combinations:
Inputs | Outputs
---|---
Fundus image | Glaucoma probability, binary referral decision, 10 diagnostic features
Challenge Performance
Date | Challenge | Phase | Rank
---|---|---|---
April 20, 2024 | JustRAIGS | Development Phase | 3
April 20, 2024 | JustRAIGS | Test Phase | 7
Model Facts
Summary
A detailed description of the algorithm can be found here: https://github.com/TomaszKubrak/Glaucoma_classification_JustRAIGS
Mechanism
Details about the target population can be found in the description of the JustRAIGS dataset: https://www.sciencedirect.com/science/article/pii/S2666914523000325
The backbone of the architecture comprises four independent Vision Transformers (ViTs), preceded by optic disc detection with YOLOv8 and extensive image and dataset preprocessing. The architecture takes fundus images as input and outputs stacked glaucoma probabilities together with binary referral decisions and 10 diagnostic features, as illustrated in the sketch below.
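The following is a minimal PyTorch sketch of how such a pipeline could be wired together, assuming the ViT backbones come from `timm` and the optic-disc detector from `ultralytics`. The class names, head layout, averaging of the four ViTs, and the 0.5 decision threshold are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical sketch: YOLOv8 optic-disc detection followed by an ensemble of
# four independent ViTs producing a glaucoma probability, a binary decision,
# and 10 diagnostic-feature scores. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import timm
from ultralytics import YOLO


class JustViTEnsemble(nn.Module):
    """Four independent ViT backbones, each with a head predicting
    1 referral logit plus 10 diagnostic-feature logits."""

    def __init__(self, backbone: str = "vit_base_patch16_224", n_features: int = 10):
        super().__init__()
        self.backbones = nn.ModuleList(
            [timm.create_model(backbone, pretrained=True, num_classes=0) for _ in range(4)]
        )
        dim = self.backbones[0].num_features
        self.heads = nn.ModuleList([nn.Linear(dim, 1 + n_features) for _ in range(4)])

    def forward(self, x: torch.Tensor) -> dict:
        # Stack per-model outputs: shape (4, batch, 1 + n_features)
        logits = torch.stack([head(bb(x)) for bb, head in zip(self.backbones, self.heads)])
        probs = torch.sigmoid(logits)
        glaucoma_prob = probs[:, :, 0].mean(dim=0)   # average over the 4 ViTs
        features = probs[:, :, 1:].mean(dim=0)       # 10 diagnostic features
        return {
            "glaucoma_probability": glaucoma_prob,
            "glaucoma_binary": (glaucoma_prob > 0.5).long(),  # assumed 0.5 threshold
            "diagnostic_features": features,
        }


def crop_optic_disc(detector: YOLO, image_path: str):
    """Detect the optic disc with YOLOv8 and return the first bounding box (xyxy),
    or None if nothing is detected (the real pipeline would define a fallback)."""
    result = detector(image_path)[0]
    if len(result.boxes) == 0:
        return None
    return result.boxes.xyxy[0].tolist()


if __name__ == "__main__":
    detector = YOLO("yolov8n.pt")        # placeholder weights; a fine-tuned detector is assumed
    model = JustViTEnsemble().eval()
    dummy = torch.randn(1, 3, 224, 224)  # stands in for a preprocessed optic-disc crop
    with torch.no_grad():
        print(model(dummy))
```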
Validation and Performance
Uses and Directions
This algorithm was developed for research purposes only.
Warnings
Common Error Messages
Information on this algorithm has been provided by the Algorithm Editors, following the Model Facts label guidelines from Sendak, M.P., Gao, M., Brajer, N. et al. Presenting machine learning model information to clinical end users with model facts labels. npj Digit. Med. 3, 41 (2020). https://doi.org/10.1038/s41746-020-0253-3