AutoPET
📰 News
March 8th:
The new autoPET-II
challenge is now online!
September 18th:
Dear participants of the autoPET challenge,
Thank you for an amazing challenge. We have published the leaderboard. See you all next year for autoPET 2023 - stay tuned 😉
September 5th:
Dear participants of the autoPET challenge,
Thank you very much for your effort and your wonderful contributions! We are now in the process of analyzing the results and will contact the top teams within this week so they can prepare their MICCAI presentations.
August 31st:
Dear autoPET participants,
the challenge deadline is approaching, and we would like to encourage you to also submit your algorithm to the final test set. Don't be discouraged by the preliminary results, as these are not representative. For everyone who submits now, the chances are good, as not many teams have been able to provide a final submission so far.
August 27th:
We decided to extend the deadline for a few days to give participants the opportunity to resolve technical challenges that occurred. The new deadline is Sunday, Sep 4th.
August 2nd:
The Final Test Set is online now. You can now submit your algorithms for evaluation!
July 13th:
Grand Challenge simplified the submission process for algorithms in type-II challenges. Please see the updated Submission page if you need to create a new algorithm.
June 3rd:
The database is now published on TCIA and can be downloaded from there in DICOM format. After download, you can convert the DICOM files to, e.g., the NIfTI format using the scripts provided here.
May 19th:
We have uploaded the anthropometric and clinical information for further detailed investigations. Please see the updated Dataset page. Publication of the database (including all information) on TCIA will follow soon.
April 22nd:
The challenge GitHub repository, including the conversion scripts and our two baseline models (U-Nets), is now available: https://github.com/lab-midas/autoPET
April 18th:
The training database was updated and masks with shape mismatches were corrected.
April 13th:
4 datasets have a mask shape mismatch (they will be replaced with matching masks soon; an update will follow here):
- PETCT_e664932bbc/06-09-2007-NA-PET-CT Ganzkoerper primaer mit KM-68140
- PETCT_7c1e6175e0/01-30-2005-NA-PET-CT Ganzkoerper primaer mit KM-42498
- PETCT_448225c237/01-16-2006-NA-PET-CT Ganzkoerper primaer mit KM-96439
- PETCT_dd6165ae36/06-01-2006-NA-PET-CT Ganzkoerper primaer mit KM-08084
April 4th:
The challenge GitHub repository with all relevant information and scripts will be available soon... stay tuned!
🎬 Introduction
Positron Emission Tomography / Computed Tomography (PET/CT) is an integral part of the diagnostic workup for various malignant solid tumor entities. Due to its wide applicability, Fluorodeoxyglucose (FDG) is the most widely used PET tracer in an oncological setting: it reflects the glucose consumption of tissues, e.g. the typically increased glucose consumption of tumor lesions.
In clinical routine, PET/CT is mostly analyzed qualitatively by experienced medical imaging experts. Additional quantitative evaluation of PET information would potentially allow for more precise and individualized diagnostic decisions.
A crucial initial processing step for quantitative PET/CT analysis is the segmentation of tumor lesions, which enables accurate feature extraction, tumor characterization, oncologic staging, and image-based therapy response assessment. Manual lesion segmentation is, however, associated with enormous effort and cost and is thus infeasible in clinical routine. Automation of this task is therefore necessary for widespread clinical implementation of comprehensive PET image analysis.
Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated that this task is feasible in principle. However, despite these recent advances, tumor lesion detection and segmentation in whole-body PET/CT remains challenging. The specific difficulty of lesion segmentation in FDG-PET lies in the fact that not only tumor lesions but also healthy organs (e.g. the brain) can have significant FDG uptake; avoiding false positive segmentations can thus be difficult. One bottleneck for progress in automated PET lesion segmentation is the limited availability of training data for algorithm development and optimization.
To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, we host the autoPET challenge and provide a large, publicly available training data set on TCIA.
AutoPET is hosted at MICCAI 2022 and supported by the European Society for Hybrid, Molecular and Translational Imaging (ESHI).
Figure: Example case of fused FDG-PET/CT whole-body data. The right image shows the manually segmented malignant lesions.
📋 Task
Automatic tumor lesion segmentation in whole-body FDG-PET/CT on a large-scale database of 1014 studies of 900 patients (training database) acquired at a single site:
- accurate and fast lesion segmentation
- avoidance of false positives (brain, bladder, etc.)
Testing will be performed on 150 studies (held-out test database), with 100 studies originating from the same hospital as the training database and 50 drawn from a different hospital with a similar acquisition protocol, to assess algorithm robustness and generalizability.
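For context on the two task goals above (accurate segmentation and avoidance of false positives), segmentation overlap is commonly quantified with the Dice coefficient and false-positive burden as the volume of predicted lesion voxels outside the ground truth. The sketch below is illustrative NumPy code, not the challenge's official evaluation script; the function names and the voxel-volume parameter are assumptions:

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def false_positive_volume(pred: np.ndarray, gt: np.ndarray,
                          voxel_volume_ml: float = 1.0) -> float:
    """Volume of predicted lesion voxels that lie outside the ground truth."""
    fp = np.logical_and(pred.astype(bool), ~gt.astype(bool))
    return fp.sum() * voxel_volume_ml


# toy 4x4 example
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1           # 4 ground-truth voxels
pred = np.zeros_like(gt)
pred[1:3, 1:2] = 1         # 2 predicted voxels inside the ground truth
pred[0, 0] = 1             # 1 false positive (e.g. spurious brain uptake)
print(dice_coefficient(pred, gt))       # 2*2 / (3+4) ≈ 0.571
print(false_positive_volume(pred, gt))  # 1.0
```

Penalizing false-positive volume separately from Dice matters here because a physiologically avid organ such as the brain can contribute a large spurious segmentation while barely moving the Dice score on a patient with extensive disease.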