JustViT

About
Editors:
Contact email:
Image Version: cb45e9b6-2549-44e6-9e0c-a385ac7b9901 (April 20, 2024)
Summary
A detailed description of the algorithm can be found here: https://github.com/TomaszKubrak/Glaucoma_classification_JustRAIGS
Mechanism
Details about the target population can be found in the description of the JustRAIGS dataset: https://www.sciencedirect.com/science/article/pii/S2666914523000325
The backbone of the architecture comprises four independent Vision Transformers (ViT), preceded by optic disc detection with YOLOv8 and extensive image and dataset preprocessing. The architecture accepts fundus images as input and outputs stacked glaucoma probabilities, together with binary referral labels and 10 diagnostic features.
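As an illustration only, the sketch below shows how such a pipeline could be wired up in Python: a YOLOv8 detector crops the optic-disc region, and four independent ViT classifiers produce stacked probabilities. The detector weights, the `vit_base_patch16_224` backbone from `timm`, the head size, and the simple probability averaging are assumptions for this sketch, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of the described pipeline (not the authors' code):
# YOLOv8 optic-disc crop -> four independent ViTs -> stacked probabilities,
# a binary referral label, and 10 diagnostic-feature probabilities.
import numpy as np
import torch
import timm
from ultralytics import YOLO


def crop_optic_disc(image: np.ndarray, detector: YOLO) -> np.ndarray:
    """Crop the optic-disc region from an HWC fundus image using a YOLOv8 detector."""
    result = detector(image, verbose=False)[0]
    if len(result.boxes) == 0:
        return image  # fall back to the full fundus image if no disc is detected
    x1, y1, x2, y2 = result.boxes.xyxy[0].int().tolist()
    return image[y1:y2, x1:x2]


class JustViTEnsemble(torch.nn.Module):
    """Four independent ViT backbones; each predicts glaucoma plus 10 diagnostic features."""

    def __init__(self, num_features: int = 10):
        super().__init__()
        # Backbone choice is an assumption; the repository may use different ViT variants.
        self.backbones = torch.nn.ModuleList([
            timm.create_model("vit_base_patch16_224", pretrained=True,
                              num_classes=1 + num_features)
            for _ in range(4)
        ])

    def forward(self, x: torch.Tensor):
        # x: (B, 3, 224, 224) preprocessed optic-disc crops
        logits = torch.stack([m(x) for m in self.backbones], dim=0)  # (4, B, 11)
        probs = torch.sigmoid(logits)                                # stacked probabilities
        glaucoma_prob = probs[..., 0].mean(dim=0)                    # averaged over the 4 ViTs
        glaucoma_label = (glaucoma_prob > 0.5).int()                 # binary referral decision
        feature_probs = probs[..., 1:].mean(dim=0)                   # 10 diagnostic features
        return glaucoma_prob, glaucoma_label, feature_probs
```

The averaging over the four backbones and the 0.5 decision threshold are placeholders; the repository documents the exact preprocessing and aggregation used in the challenge submission.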
Interfaces
This algorithm implements all of the following input-output combinations:
| | Inputs | Outputs |
|---|---|---|
| 1 | Fundus image | Glaucoma probability, binary referral label, 10 diagnostic features |
Validation and Performance
Left empty by the Algorithm Editors
Challenge Performance
| Date | Challenge | Phase | Rank |
|---|---|---|---|
| April 20, 2024 | JustRAIGS | Development Phase | 3 |
| April 20, 2024 | JustRAIGS | Test Phase | 7 |
Uses and Directions
This algorithm was developed for research purposes only.
Warnings
Left empty by the Algorithm Editors
Common Error Messages
Left empty by the Algorithm Editors
Information on this algorithm has been provided by the Algorithm Editors, following the Model Facts labels guidelines from:
Sendak, M.P., Gao, M., Brajer, N. et al. Presenting machine learning model information to clinical end users with model facts labels. npj Digit. Med. 3, 41 (2020). https://doi.org/10.1038/s41746-020-0253-3