Ashwin Raju
Ashwin
- United States of America
- University of Texas, Arlington
- Computer Science
- Statistics
- Member for 7 years
Activity Overview
PAIP2020
Challenge User
Building on the success of its predecessor, PAIP2020 is the second challenge organized by the Pathology AI Platform (PAIP) and Seoul National University Hospital (SNUH). PAIP2020 asks participants not only to detect whole tumor areas in colorectal cancer but also to classify their molecular subtypes, which will help characterize tumor heterogeneity with respect to prognosis and therapeutic response. All participants should predict one of the molecular carcinogenesis pathways, i.e., microsatellite instability (MSI) in colorectal cancer, by performing digital image analysis without clinical tests. The task has high clinical relevance because the current procedure requires extensive microscopic assessment by pathologists, so automated algorithms could reduce their workload by serving as a diagnostic aid.
Thyroid Nodule Segmentation and Classification
Challenge User
The main topic of the TN-SCUI2020 challenge is finding automatic algorithms to accurately classify thyroid nodules in ultrasound images. It provides the largest public thyroid nodule dataset to date, with over 4,500 patient cases covering different ages and genders and collected on different ultrasound machines. Each ultrasound image comes with its ground-truth class (benign or malignant) and a detailed delineation of the nodule. The challenge offers a unique opportunity for participants from different backgrounds (e.g., academia, industry, and government) to compare their algorithms in an impartial way.
SARAS-ESAD
Challenge User
This challenge is part of the Medical Imaging with Deep Learning (MIDL) 2020 conference, held 6-8 July 2020 in Montréal. The SARAS (Smart Autonomous Robotic Assistant Surgeon) EU consortium, www.saras-project.eu, is working towards replacing the assistant surgeon in minimally invasive surgery (MIS) with two assistive robotic arms. To accomplish that, an artificial-intelligence-based system is required that can not only understand the complete surgical scene but also detect the actions being performed by the main surgeon. This information can later be used to infer the response required from the autonomous assistant surgeon.
3D Teeth Scan Segmentation and Labeling Challenge MICCAI2022
Challenge User
Computer-aided design (CAD) tools have become increasingly popular in modern dentistry for highly accurate treatment planning. In particular, in orthodontic CAD systems, advanced intraoral scanners (IOSs) are now widely used because they provide precise digital surface models of the dentition. Such models can dramatically help dentists simulate tooth extraction, movement, deletion, and rearrangement, and therefore ease the prediction of treatment outcomes. Although IOSs are becoming widespread in clinical dental practice, only a few contributions on teeth segmentation/labeling are available in the literature, and there is no publicly available database. A fundamental issue with IOS data is the ability to reliably segment and identify teeth in scanned observations. Teeth segmentation and labeling are difficult because of the inherent similarity between tooth shapes and their ambiguous positions on the jaw.