Sulaiman Vesal
svesal
- United States of America
- Stanford University
- PIMED
Statistics
- Member for 7 years, 9 months
- 64 challenge submissions
Activity Overview
IDRiD
This challenge evaluates automated techniques for the analysis of fundus photographs. We target segmentation of retinal lesions such as exudates, microaneurysms, and hemorrhages, as well as detection of the optic disc and fovea. We also seek grading of fundus images according to the severity of diabetic retinopathy (DR) and diabetic macular edema (DME).
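Purely as an illustrative sketch (not part of the challenge description, and the challenge's official evaluation metrics may differ), one common way to score a predicted lesion mask against a ground-truth delineation is the Dice overlap coefficient:

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks (1 = lesion, 0 = background)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: two 3x3 masks that agree on one lesion pixel.
pred = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]])
gt = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
print(f"Dice = {dice_score(pred, gt):.2f}")  # 2*1 / (2 + 1) ≈ 0.67
```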
PAVES
Peripheral Artery:Vein Enhanced Segmentation (PAVES) is a challenge focused on providing easily interpretable, clinically relevant images that clinicians (vascular interventional radiologists and vascular surgeons) can readily understand, derived from MRA datasets in which the venous and arterial vasculature may be equally enhanced. The setting is lower-limb arterial occlusive disease, where imaging of the below-knee arterial vasculature is critical in planning limb-salvage interventions. However, imaging is challenging because the high spatial resolution needed to image small vessels competes with imaging time constraints: there is often a very short arteriovenous transit time for contrast passage from the arterial to the venous compartment. While dynamic MRA techniques can usually allow arterial imaging without venous ‘contamination’, they necessarily sacrifice spatial resolution.
Thyroid Nodule Segmentation and Classification
The main topic of the TN-SCUI2020 challenge is finding automatic algorithms to accurately classify thyroid nodules in ultrasound images. It will provide the largest public thyroid nodule dataset, with over 4,500 patient cases spanning different ages and genders, collected using different ultrasound machines. Each ultrasound image is provided with its ground-truth class (benign or malignant) and a detailed delineation of the nodule. This challenge will provide a unique opportunity for participants from different backgrounds (e.g. academia, industry, and government) to compare their algorithms in an impartial way.
CT diagnosis of COVID-19
Coronavirus disease 2019 (COVID-19) has infected more than 1.3 million individuals worldwide and caused more than 106,000 deaths. One major hurdle in controlling the spread of this disease is the inefficiency and shortage of medical tests. To mitigate this, we propose this competition to encourage the development of effective deep learning techniques to diagnose COVID-19 from CT images. The problem we want to solve is to classify each CT image as COVID-19 positive (the image has clinical findings of COVID-19) or COVID-19 negative (the image does not have clinical findings of COVID-19). It is a binary classification problem based on CT images.
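As an illustration only (not the organizers' baseline), a minimal PyTorch sketch of a binary classifier for single-channel CT slices; the architecture, input size, and loss shown here are assumptions:

```python
import torch
import torch.nn as nn

class TinyCovidClassifier(nn.Module):
    """Minimal CNN mapping a 1-channel CT slice to a single COVID-19 logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: > 0 means "positive"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: a batch of four 224x224 CT slices.
model = TinyCovidClassifier()
logits = model(torch.randn(4, 1, 224, 224))
probs = torch.sigmoid(logits)  # per-image probability of COVID-19 findings
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([[1.], [0.], [1.], [0.]]))
```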
A-AFMA
Prenatal ultrasound (US) measurement of amniotic fluid is an important part of fetal surveillance, as it provides a non-invasive way of assessing whether there is oligohydramnios (insufficient amniotic fluid) or polyhydramnios (excess amniotic fluid), both of which are associated with numerous problems during pregnancy and after birth. In this Image Analysis Challenge, we aim to attract attention from the image analysis community to the problem of automated measurement of the maximum vertical pocket (MVP) using a predefined ultrasound video clip acquired with a linear-sweep protocol [1]. We define two tasks. The first task is to automatically detect amniotic fluid and the maternal bladder. The second task is to identify the appropriate points for the MVP measurement in the selected frame of the video clip and to calculate the length of the line connecting these points. The data were collected from women in the second trimester of pregnancy, as part of the PURE study at the John Radcliffe Hospital in Oxford, UK.
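As an illustration of the final step of the second task only (the point coordinates and pixel spacing below are assumed values, not part of the challenge protocol), a minimal Python sketch that converts two annotated endpoints in a frame into a physical MVP length:

```python
import math

def mvp_length_mm(point_a, point_b, pixel_spacing_mm):
    """Euclidean distance between two annotated (row, col) endpoints,
    converted to millimetres using the per-axis pixel spacing."""
    d_row = (point_a[0] - point_b[0]) * pixel_spacing_mm[0]
    d_col = (point_a[1] - point_b[1]) * pixel_spacing_mm[1]
    return math.hypot(d_row, d_col)

# Toy example: endpoints 120 rows apart in a frame with 0.2 mm pixels.
print(mvp_length_mm((80, 200), (200, 200), (0.2, 0.2)))  # 24.0 mm
```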
3D Teeth Scan Segmentation and Labeling Challenge MICCAI2022
Computer-aided design (CAD) tools have become increasingly popular in modern dentistry for highly accurate treatment planning. In particular, in orthodontic CAD systems, advanced intraoral scanners (IOSs) are now widely used, as they provide precise digital surface models of the dentition. Such models can dramatically help dentists simulate tooth extraction, movement, deletion, and rearrangement, and therefore ease the prediction of treatment outcomes. Although IOSs are becoming widespread in clinical dental practice, there are only a few contributions on teeth segmentation/labeling in the literature and no publicly available database. A fundamental issue with IOS data is the ability to reliably segment and identify teeth in scanned observations. Teeth segmentation and labeling is difficult because of the inherent similarity between tooth shapes as well as their ambiguous positions on the jaws.