William Green
dskswu
- United States of America
- Student
Statistics
- Member for 5 years, 9 months
Activity Overview
PAIP2020
Building on the success of its predecessor, PAIP2020 is the second challenge organized by the Pathology AI Platform (PAIP) and Seoul National University Hospital (SNUH). PAIP2020 goes beyond detecting whole tumor areas in colorectal cancer to classifying molecular subtypes, which will lead to characterization of tumor heterogeneity with respect to prognosis and therapeutic response. Participants must predict one of the molecular carcinogenesis pathways, microsatellite instability (MSI), in colorectal cancer by digital image analysis alone, without clinical tests. The task has high clinical relevance, as the currently used procedure requires extensive microscopic assessment by pathologists; automated algorithms could therefore reduce pathologists' workload by serving as diagnostic assistance.
Thyroid Nodule Segmentation and Classification
The main topic of the TN-SCUI2020 challenge is finding automatic algorithms to accurately classify thyroid nodules in ultrasound images. It provides the largest public dataset of thyroid nodules, with over 4,500 patient cases spanning different ages and genders, collected using different ultrasound machines. Each ultrasound image comes with its ground-truth class (benign or malignant) and a detailed delineation of the nodule. The challenge offers a unique opportunity for participants from different backgrounds (e.g., academia, industry, and government) to compare their algorithms in an impartial way.
SARAS-ESAD
This challenge is part of the Medical Imaging with Deep Learning (MIDL) 2020 conference, held 6-8 July 2020 in Montréal. The SARAS (Smart Autonomous Robotic Assistant Surgeon) EU consortium, www.saras-project.eu, is working toward replacing the assistant surgeon in minimally invasive surgery (MIS) with two assistive robotic arms. This requires an artificial-intelligence-based system that can not only understand the complete surgical scene but also detect the actions being performed by the main surgeon. That information can later be used to infer the response required from the autonomous assistant surgeon.
CT diagnosis of COVID-19
Coronavirus disease 2019 (COVID-19) has infected more than 1.3 million individuals worldwide and caused more than 106,000 deaths. One major hurdle in controlling the spread of the disease is the inefficiency and shortage of medical tests. To mitigate this, we propose this competition to encourage the development of effective deep learning techniques for diagnosing COVID-19 from CT images. The problem is to classify each CT image as COVID-19 positive (the image has clinical findings of COVID-19) or COVID-19 negative (the image does not). It is a binary classification problem based on CT images.
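The task format above can be sketched as a minimal binary classifier. This is a toy logistic-regression baseline over flattened pixel intensities in NumPy, not any entrant's method; competitive solutions would use deep CNNs, and the data, shapes, and hyperparameters here are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Train a logistic-regression baseline for COVID-positive vs. negative.

    X: (n_images, n_pixels) flattened, normalized CT slices.
    y: (n_images,) labels, 1 = COVID-19 findings present, 0 = absent.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)           # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy example: four tiny "images" of 3 pixels each (purely illustrative).
X = np.array([[0.9, 0.8, 0.7],
              [0.8, 0.9, 0.6],
              [0.1, 0.2, 0.1],
              [0.2, 0.1, 0.3]])
y = np.array([1, 1, 0, 0])
w, b = train_logreg(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)  # binary decision per image
print(preds)
```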
A-AFMA
Prenatal ultrasound (US) measurement of amniotic fluid is an important part of fetal surveillance, as it provides a non-invasive way of assessing whether there is oligohydramnios (insufficient amniotic fluid) or polyhydramnios (excess amniotic fluid), both of which are associated with numerous problems during pregnancy and after birth. In this image analysis challenge, we aim to attract attention from the image analysis community to the problem of automated measurement of the maximal vertical pocket (MVP) using a predefined ultrasound video clip based on a linear-sweep protocol [1]. We define two tasks. The first is to automatically detect the amniotic fluid and the maternal bladder. The second is to identify the appropriate points for MVP measurement in a selected frame of the video clip and calculate the length of the line connecting these points. The data were collected from women in the second trimester of pregnancy as part of the PURE study at the John Radcliffe Hospital in Oxford, UK.
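The final step of the second task, computing the length of the line connecting the two measurement points, amounts to a Euclidean distance in physical units. A minimal sketch (the function name, pixel-spacing values, and point coordinates are illustrative assumptions, not part of the challenge API):

```python
import math

def mvp_length_mm(p1, p2, pixel_spacing_mm):
    """Length of the line connecting two MVP measurement points.

    p1, p2: (row, col) pixel coordinates of the two caliper points.
    pixel_spacing_mm: (row_spacing, col_spacing) in mm per pixel.
    """
    dr = (p1[0] - p2[0]) * pixel_spacing_mm[0]  # vertical extent in mm
    dc = (p1[1] - p2[1]) * pixel_spacing_mm[1]  # horizontal extent in mm
    return math.hypot(dr, dc)

# Illustrative: points 120 px apart vertically at 0.25 mm/pixel spacing.
print(mvp_length_mm((100, 200), (220, 200), (0.25, 0.25)))  # → 30.0
```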
Breast Cancer Segmentation
Semantic segmentation of histologic regions in scanned FFPE H&E stained slides of triple-negative breast cancer from The Cancer Genome Atlas. See: Amgad M, Elfandy H, ..., Gutman DA, Cooper LAD. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics. 2019. doi: 10.1093/bioinformatics/btz083
WSSS4LUAD
The WSSS4LUAD dataset contains over 10,000 patches of lung adenocarcinoma cropped from whole slide images (WSIs) from Guangdong Provincial People's Hospital and TCGA, with image-level annotations. The goal of this challenge is to perform semantic segmentation differentiating three important tissue types in the WSIs of lung adenocarcinoma: cancerous epithelial regions, cancerous stroma regions, and normal regions. Participants have to use image-level annotations to produce pixel-level predictions.
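A common way to bridge the gap from image-level labels to pixel-level predictions is to threshold per-class activation maps (e.g., CAMs from a classifier trained on the image labels) into pseudo-masks. A minimal NumPy sketch, assuming activation maps are already available and normalized to [0, 1] (the maps, threshold, and function name here are illustrative, not the official baseline):

```python
import numpy as np

def pseudo_mask(cams, image_labels, threshold=0.5):
    """Derive a pixel-level pseudo-label map from class activation maps.

    cams: (n_classes, H, W) activation maps in [0, 1], e.g. one map each
        for tumor epithelium, tumor-associated stroma, and normal tissue.
    image_labels: (n_classes,) binary image-level labels; classes absent
        from the image are suppressed so no pixel can be assigned to them.
    Returns an (H, W) int map: the argmax class where its activation
    exceeds the threshold, and -1 (ignore) where no class is confident.
    """
    cams = cams * image_labels[:, None, None]  # zero out absent classes
    best = cams.argmax(axis=0)                 # most activated class per pixel
    confident = cams.max(axis=0) > threshold   # keep only confident pixels
    return np.where(confident, best, -1)

# Toy 2x2 example with 3 classes; only classes 0 and 2 present in the image.
cams = np.array([[[0.9, 0.2], [0.1, 0.1]],
                 [[0.8, 0.8], [0.8, 0.8]],   # class 1 absent -> suppressed
                 [[0.1, 0.7], [0.2, 0.9]]])
labels = np.array([1, 0, 1])
print(pseudo_mask(cams, labels))  # → [[ 0  2] [-1  2]]
```

The resulting pseudo-masks can then supervise an ordinary segmentation network, with the -1 pixels excluded from the loss.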
Multi-site, Multi-Domain Airway Tree Modeling (ATM’22)
Airway segmentation is a crucial step in the analysis of pulmonary diseases including asthma, bronchiectasis, and emphysema. Accurate segmentation based on X-ray computed tomography (CT) enables quantitative measurement of airway dimensions and wall thickness, which can reveal abnormalities in patients with chronic obstructive pulmonary disease (COPD). In addition, the extraction of patient-specific airway models from CT images is required for navigation-assisted surgery.
ToothFairy: Cone-Beam Computed Tomography Segmentation Challenge
This is the first edition of the ToothFairy challenge, organized by the University of Modena and Reggio Emilia in collaboration with Radboud University. The challenge aims to push the development of deep learning frameworks for segmenting the Inferior Alveolar Canal (IAC) by incrementally extending the amount of publicly available 3D-annotated Cone Beam Computed Tomography (CBCT) scans. CBCT is becoming increasingly important for treatment planning and diagnosis in implant dentistry and maxillofacial surgery. The three-dimensional information acquired with CBCT can be crucial for planning a vast number of surgical interventions while preserving noble anatomical structures such as the IAC, which contains the homonymous nerve (the Inferior Alveolar Nerve, IAN). Deep learning models can support medical personnel in surgical planning by providing a voxel-level segmentation of the IAN automatically extracted from CBCT scans.
ToothFairy2: Multi-Structure Segmentation in CBCT Volumes
This is the second edition of the ToothFairy challenge, organized by the University of Modena and Reggio Emilia in collaboration with Radboud University Medical Center. The challenge is hosted on Grand Challenge and is part of MICCAI 2024.