Winners announcement, post-challenge phase & SASHIMI23
By: mmaspero on Sept. 19, 2023, 7:28 p.m.
We are thrilled to announce the results of the challenge! Here are the winners of both Task 1 and Task 2 after metrics normalization, as described on the evaluation page:
Task 1
1. SMU-MedVision
2. Jetta_Pang
3. FAYIU
4. Elekta
5. iu_mia
Task 2
1. SMU-MedVision
2. GEneRaTion
3. iu_mia
4. FAYIU
5. Smilenaxx
Congratulations to all the winning teams for their outstanding contributions and efforts in the challenge! The prizes are already on their way. The full leaderboards will be updated with the final rankings by the end of tomorrow: check here for task 1 and task 2.
On September 20, the post-challenge phase will open, and you will be able to submit newly developed algorithms and obtain image similarity metrics. Later this year, we will also release plans enabling you to assess the dose metrics on your own. Please be aware that the ground truth data for validation and testing will only be released once all phases of the challenge are closed (expected in about five years).
Now, let's look at the provisional program for the event, where challenge participants will gather to present and discuss the results. Please note that this program may change; check this site for the final timeline. Each team/participant is responsible for registering for the event via the MICCAI23 website.
Teams with valid submissions will be contacted to give a 1-minute highlight (in person or digitally) during the SASHIMI23 workshop and to present their poster (physical presence is strongly advised). Instructions about the format will be mailed directly to the teams; keep in mind that the deadline to send us your material is October 2.
Provisional Program - final program at https://2023.sashimi-workshop.org/program/:
- 13:30 - 13:35: Welcome + opening remarks
- 13:35 - 14:00: Two SASHIMI orals (10 + 2 min)
- 14:00 - 14:15: SynthRAD challenge overview
- 14:15 - 14:40: Three SynthRAD orals (10 + 2 min, 2x 5 + 1 min)
  - A Hybrid Network with Multi-scale Structure Extraction and Preservation for MR-to-CT Synthesis in SynthRAD2023 (10 + 2 min) - Zeli Chen, Kaiyi Zheng, Chuanpu Li, and Yiwen Zhang
  - Synthesis of CT images from MRI images based on nnU-Net (5 + 1 min) - Haowen Pang, Chuyang Ye
  - A Self-Pretraining Paradigm For CBCT-CT Translation (5 + 1 min) - Runqi Wang, Zheng Zhang, Ruizhi Hou, Lei Xiang, and Tao Song
- 14:40 - 15:20: 1-min poster highlights (9 SASHIMI + 20 SynthRAD)
- 15:20 - 16:15: Coffee + joint poster session
- 16:15 - 17:00: Keynote
- 17:00 - 17:25: Two SASHIMI orals (10 + 2 min)
- 17:25 - 17:30: Sponsor message
- 17:30 - 17:35: Award + closing
Below is the list of all the highlight/poster presentations (20) invited to the workshop:
- Image translation using ShuffleUNet - Juhyung (Tony) Ha, Jong Sung Park
- Swin UNETR Based MRI-to-CT and CBCT-CT Synthesis - Fuxin Fan, Jingna Qiu, Yixing Huang
- Paired MR-to-sCT Translation using Conditional GANs - an Application to MR-guided Radiotherapy - Alexandra Alain-Beaudoin, Laurence Savard, and Silvain Bériault
- Synthetic CT generation from CBCT images: Short Paper for SynthRAD 2023 - Pengxin Yu
- Generate CT from CBCT using DDIM - Gengwan Li, Xueru Zhang
- MR to CT Synthesis using U-net - Hongbin Guo, Zhanyao Huang
- Team KoalAI: Locally-enhanced 3D Pix2Pix GAN for Synthetic CT Generation - Bowen Xin, Aaron Nicolson, Hilda Chourak, Gregg Belous, Jason Dowling
- SynthRAD Challenge Algorithm Summary for Team FGH_365 - Yubo Fan, Han Liu, Ipek Oguz, and Benoit M. Dawant
- SynthDiffuson at SynthRAD 2023 Task 1: Synthesizing Computed Tomography for Radiotherapy - Lujia Zhong, Zhiwei Deng, Shuo Huang, Wenhao Chi, Jianwei Zhang, Yonggang Shi
- Multi-Planar Convolutional Neural Networks for MRI and CBCT to CT Translation - Gustav Muller-Franzes, Firas Khader, Daniel Truhn
- Conditional GAN is all you need for MR2CT - Xia Li, Ye Zhang
- A Simple Two-stage network For MR-CT Translation - Zhihao Zhang, Long Wang, Tao Song, and Lei Xiang
- Synthetic CT Generation from CBCT using MSG-GAN - Lu Bai, Chenyu, Chenqi, Shaobin Wang, Yi Du
- SynthRAD 2023: Synthetic CT from MRI - Derk Mus, Bram Kooiman, Rick Bergmans, Jara Linders
- Guiding Unsupervised MRI-to-CT and CBCT-to-CT synthesis using Content and style Representation by an Enhanced Perceptual synthesis (CREPs) loss (tasks 1 and 2) - Cedric Hemon, Valentin Boussot, Blanche Texier
- A multi-channel cycleGAN for CBCT to sCT generation - Chelsea A. H. Sargeant, Edward G. A. Henderson, Dónal M. McSweeney, Aaron G. Rankin, and Denis Page
- Synthesizing 3D computed tomography from MRI or CBCT using 2.5D deep neural networks - Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa
- MR to CT translation using Generative Adversarial Networks - Reza Karimzadeh, Bulat Ibragimov
- Synthrad 2023 - MRI-to-sCT generation to facilitate MR-only Radiotherapy - Thomas Helfer, Walter Hugo Lopez Pinaya, Francisco Pereira, Adam G. Thomas, Jessica Dafflon
- MRI-to-sCT and CBCT-to-sCT generation methods in SynthRAD2023 (tasks 1 and 2) - Zijie Chen, Enpei Wang
We look forward to an exciting workshop with exceptional presentations. Thank you to all participants for your valuable contributions, and congratulations once again to the winning teams!
The SynthRAD2023 Challenge Organizers