
Frauke Wilm

frauke.wilm

  •  Germany
  •  Pattern Recognition Lab
  •  Computer Sciences, Friedrich-Alexander-University Erlangen-Nürnberg
Statistics
  • Member for 3 years, 10 months
  • 26 challenge submissions

Activity Overview

ANHIR
Challenge User

The challenge focuses on comparing the accuracy (using manually annotated landmarks) and the approximate speed of automatic non-linear registration methods for aligning microscopy images of multi-stained histology tissue samples.

LYON19
Challenge User

Automatic lymphocyte detection in IHC-stained specimens.

MIDOG Challenge 2021
Challenge Editor

Mitosis Domain Generalization Challenge 2021 (part of MICCAI 2021)

TIGER
Challenge User

Grand challenge on automated assessment of tumor-infiltrating lymphocytes in digital pathology slides of triple-negative and HER2-positive breast cancers.

BCNB
Challenge User

Early Breast Cancer Core-Needle Biopsy WSI Dataset

ACROBAT 2023
Challenge User

The ACROBAT challenge aims to advance the development of WSI registration algorithms that can align WSIs of IHC-stained breast cancer tissue sections to corresponding tissue regions that were stained with H&E. All WSIs originate from routine diagnostic workflows.

MItosis DOmain Generalization Challenge 2022
Challenge Editor

PAIP 2023: TC prediction in pancreatic and colon cancer
Challenge User

Tumor cellularity prediction in pancreatic cancer (supervised learning) and colon cancer (transfer learning).

The LEOPARD Challenge
Challenge User

Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation
Challenge User

REport Generation in pathology using Pan-Asia Giga-pixel WSIs
Challenge User

This project focuses on advancing automated pathology report generation using vision-language foundation models. It addresses the limitations of traditional NLP metrics (e.g., BLEU, METEOR, ROUGE) by emphasizing clinically relevant evaluation. The initiative includes standardized datasets, expert comparisons, and medical-domain-specific metrics to assess model performance. It also explores the integration of generated reports into diagnostic workflows with clinical feedback. To support fairness and generalizability, the challenge dataset comprises ~20,500 cases from six medical centers in Korea, Japan, India, Turkey, and Germany, promoting multicultural and multiethnic medical AI development.

Quality assessment of whole-slide images through artifact detection
Algorithm User

Quality scoring with artifact detection in whole-slide images: out-of-focus regions, tissue folds, ink, dust, pen marks, and air bubbles.

cosas-test-phase
Algorithm User