SPIE-AAPM-NCI DAIR Digital Breast Tomosynthesis Lesion Detection Challenge (DBTex) - Phase 1


Overview:

We invite participation in the DBTex Grand Challenge: submit algorithms for the detection of biopsy-proven breast lesions on digital breast tomosynthesis (DBT) images. The results of the competition will be announced at the special session of the SPIE Medical Imaging 2021 conference. Participants in the first DBTex Grand Challenge are encouraged to submit their work for peer review to SPIE's Journal of Medical Imaging.

Citation: Please refer to our associated paper about this challenge, its results, and resources:

Konz N, Buda M, Gu H, et al. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis. JAMA Netw Open. 2023;6(2):e230524. doi:10.1001/jamanetworkopen.2023.0524

Lesion Detection Algorithm Codebases:

Please check out https://github.com/mazurowski-lab/DBT-cancer-detection-algorithms for codebases of lesion detection algorithms submitted to the challenge, as well as baseline detection methods.

Organizers:

This challenge is organized by SPIE (the international society for optics and photonics), The American Association of Physicists in Medicine (AAPM), the National Cancer Institute (NCI), and Duke Center for Artificial Intelligence in Radiology (DAIR).

Prizes:

The winning team will receive a $1000 prize sponsored by the Duke Center for Artificial Intelligence in Radiology (DAIR), contingent on making their code publicly available on GitHub and giving a presentation during the special challenge session at the SPIE Medical Imaging Symposium (2021). Depositing code is required only for eligibility for the $1000 prize, but all participants are encouraged to make their code publicly available. Additionally, two individuals from each of the two top-performing teams, as well as one individual from the third-best-performing team, will receive a waiver of the meeting registration fee in order to present their methods during the SPIE Medical Imaging Conference.

Important Dates:

  1. Release date of training set cases with truth: December 14, 2020
  2. Release date of validation set cases: January 4, 2021
  3. Release date of test set cases: January 15, 2021
  4. Submission deadline for participants’ test set output: January 25, 2021
  5. Challenge results released to participants: February 4, 2021
  6. SPIE Medical Imaging Symposium: February 14–18, 2021

 Organizers and Major Contributors:

  1. Maciej Mazurowski, Duke University (maciej.mazurowski@duke.edu)
  2. Sam Armato, University of Chicago (s-armato@uchicago.edu)
  3. Karen Drukker, University of Chicago (kdrukker@uchicago.edu)
  4. Lubomir Hadjiiski, University of Michigan (lhadjisk@umich.edu)
  5. Kenny Cha, FDA (Kenny.Cha@fda.hhs.gov)
  6. Keyvan Farahani, NIH/NCI (farahank@mail.nih.gov)
  7. Mateusz Buda (mateusz.buda@duke.edu)
  8. Jichen Yang (jy168@duke.edu)
  9. Reshma Munbodh, Brown University (reshma_munbodh@brown.edu)
  10. Jinzhong Yang, MD Anderson (jyang4@mdanderson.org)
  11. Nicholas Petrick, FDA (nicholas.petrick@fda.hhs.gov)
  12. Justin Kirby, NIH/NCI (kirbyju@mail.nih.gov)
  13. Jayashree Kalpathy-Cramer, Harvard University (kalpathy@nmr.mgh.harvard.edu)
  14. Benjamin Bearce, Massachusetts General Hospital (bbearce@nmr.mgh.harvard.edu)
  15. Diane Cline, SPIE (diane@spie.org)

Questions?

Please visit https://www.reddit.com/r/DukeDBTData/ for a discussion forum.

Task:

Detect breast lesions that subsequently underwent biopsy, and provide the location and size of a bounding box, as well as a confidence score, for each detected lesion candidate. The dataset contains DBT exams with breast cancers, biopsy-proven benign lesions, actionable non-biopsied findings, and normals (scans without any findings). The task is to detect biopsy-proven lesions (masses or architectural distortions) only.

Definition of a true-positive detection:

A predicted box is counted as a true positive if the distance, in pixels of the original image, between its center point and the center of a ground-truth box is less than half of the ground-truth box diagonal or 100 pixels, whichever is larger.

For the third dimension, the ground-truth bounding box is assumed to span 25% of the volume slices before and after the ground-truth center slice; the predicted box center slice must fall within this range for the prediction to be considered a true positive.

Actionable lesions that did not undergo biopsy do not have annotations (ground truth boxes). 
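
For illustration only (not the official evaluation code), a minimal Python sketch of this two-part criterion follows; the ground-truth fields, including the annotated center slice passed as gt_center_slice, are hypothetical names rather than the official annotation format:

import math

def is_true_positive(pred, gt, gt_center_slice, num_slices):
    # pred and gt are dicts with keys X, Y, Width, Height (and Z, Depth for pred),
    # using the 0-based coordinates of the submission format.
    # In-plane test: center-to-center distance must be smaller than
    # max(half of the ground-truth box diagonal, 100 pixels).
    pred_cx = pred["X"] + pred["Width"] / 2
    pred_cy = pred["Y"] + pred["Height"] / 2
    gt_cx = gt["X"] + gt["Width"] / 2
    gt_cy = gt["Y"] + gt["Height"] / 2
    dist = math.hypot(pred_cx - gt_cx, pred_cy - gt_cy)
    if dist >= max(math.hypot(gt["Width"], gt["Height"]) / 2, 100):
        return False
    # Depth test: the ground-truth box is assumed to span 25% of the volume
    # slices before and after its center slice; the predicted box center
    # slice must fall inside that range.
    pred_center_slice = pred["Z"] + pred["Depth"] / 2
    return abs(pred_center_slice - gt_center_slice) <= 0.25 * num_slices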

Performance metric:

The primary performance metric is the average sensitivity at 1, 2, 3, and 4 false positives per DBT view, computed using only views containing a biopsied finding. The secondary performance metric is the sensitivity at 2 false positives per image across all test views, as assessed in https://arxiv.org/pdf/2011.07995.pdf. Submissions will be ranked using the primary metric; the secondary metric will be used as a tie-breaker.
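
As an illustration of how such a free-response metric can be computed, here is a simplified Python sketch (not the official evaluation script); it assumes each prediction has already been matched against the ground truth using the criterion above and that duplicate hits on the same lesion have been removed:

import numpy as np

def mean_sensitivity(scores, is_tp, num_lesions, num_views, fp_rates=(1, 2, 3, 4)):
    # scores: confidence score of every predicted box across the evaluated views
    # is_tp:  True where the prediction hit a ground-truth lesion
    order = np.argsort(scores)[::-1]        # rank predictions by descending score
    hits = np.asarray(is_tp, dtype=bool)[order]
    tps = np.cumsum(hits)                   # cumulative true positives
    fps = np.cumsum(~hits)                  # cumulative false positives
    sensitivities = []
    for rate in fp_rates:
        allowed_fps = rate * num_views      # total false positives allowed at this rate
        idx = np.searchsorted(fps, allowed_fps, side="right") - 1
        sensitivities.append(tps[idx] / num_lesions if idx >= 0 else 0.0)
    return float(np.mean(sensitivities))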


Submission format:

Submissions to the system should contain output results for all cases in a single CSV file and give the location and size of a bounding box for each detected lesion candidate as well as a confidence score that this detection represents an actual lesion. 

Formatting your submission file:

The output of your method submitted to the evaluation system should be a single CSV file with the following columns:

  1. PatientID: string - patient identifier
  2. StudyUID: string - study identifier
  3. View: string - view name, one of: RCC, LCC, RMLO, LMLO
  4. X: integer - X coordinate (on the horizontal axis) of the left edge of the predicted bounding box in 0-based indexing (for the left-most column of the image x=0)
  5. Width: integer - predicted bounding box width (along the horizontal axis)
  6. Y: integer - Y coordinate (on the vertical axis) of the top edge of the predicted bounding box in 0-based indexing (for the top-most row of the image y=0)
  7. Height: integer - predicted bounding box height (along the vertical axis)
  8. Z: integer - the first bounding box slice number in 0-based indexing (for the first slice of the image z=0)
  9. Depth: integer - predicted bounding box slice span (size along the depth axis)
  10. Score: float - predicted bounding box confidence score indicating the confidence level that the detection represents an actual lesion. This score may use an arbitrary scale, but it must be consistent across all cases within a single submission (e.g., 0.0–1.0)

Example:

PatientID,StudyUID,View,X,Width,Y,Height,Z,Depth,Score
ID1,UID1,RCC,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID2,UID2,LCC,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID3,UID3,RMLO,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID4,UID4,LMLO,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)

Each entry (row) in the submission file must correspond to exactly one predicted bounding box. There may be an arbitrary number of predicted bounding boxes for each DBT volume. It is not required to have predictions for all DBT volumes.
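
For example, a short Python snippet (with made-up detections) that writes a file in this format using only the standard library:

import csv

# Hypothetical detections produced by a model:
# (PatientID, StudyUID, View, X, Width, Y, Height, Z, Depth, Score)
detections = [
    ("ID1", "UID1", "RCC", 1024, 230, 780, 310, 25, 12, 0.97),
    ("ID1", "UID1", "LMLO", 640, 180, 1500, 200, 18, 10, 0.42),
]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["PatientID", "StudyUID", "View", "X", "Width",
                     "Y", "Height", "Z", "Depth", "Score"])
    writer.writerows(detections)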

Important:

Required preprocessing of DBT images: Coordinates of the predicted bounding boxes should be given for the correct image orientation. For some of the images, the laterality stored in the DICOM header and/or the image orientation is incorrect. In these instances, the reference standard "truth" boxes are defined with respect to the corrected image orientation. Therefore, it is crucial to provide your results for images in the correct image orientation. Python functions for loading image data from a DICOM file into a 3D array of pixel values in the correct orientation, and for displaying "truth" boxes (if any), are provided on GitHub. Please see the readme file there for instructions.
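
As a starting point, a plain read with pydicom looks like the sketch below (the path is a placeholder); note that this alone does not apply the laterality/orientation correction described above, which is what the helper functions in the challenge repository provide:

import pydicom

# Placeholder path; this raw read does NOT apply the laterality/orientation
# correction described above -- use the loading function from the challenge
# GitHub repository to obtain the corrected orientation used by the truth boxes.
ds = pydicom.dcmread("/path/to/dbt_volume.dcm")
volume = ds.pixel_array        # 3D array of pixel values, shape (slices, rows, columns)
print(volume.shape, volume.dtype)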

NOTE that in the test phase of the challenge, the submission of test set output will not be considered complete unless it is accompanied by (1) an agreement to be acknowledged in the Acknowledgment section (by name and institution, but without any link to the performance score of your particular method) of any manuscript that results from the challenge and (2) a one-paragraph statement of the methods used to obtain the submitted results, including information regarding the approach and data set(s) used to train your system, the image analysis and segmentation (if applicable) methods used, the type of classifier, and any relevant references for these methods that should be cited. This may be used in the challenge overview manuscript. Furthermore, participants are encouraged to make their developed code publicly available on GitHub; the winning team is required to deposit their code, and to present their work in the special challenge session at the SPIE Medical Imaging Meeting (2021), to be eligible for the $1000 prize.

Test Phase Submission (Added Jan 25th): 

Test phase submission



Terms and Conditions:

By participating in this challenge, each participant agrees to:

  1. Detect lesions that were subsequently sent to biopsy. Those include both cancers and benign lesions.
  2. Attest that they are not directly affiliated with the labs of any of the DBTex organizers or major contributors. Please refer to the Challenge Organizer Guidance document of the AAPM Working Group on Grand Challenges (https://www.aapm.org/GrandChallenge/documents/ChallengeOrganizerGuidance.pdf).
  3. Use the data consistently with the use conditions described on the TCIA website https://creativecommons.org/licenses/by-nc/4.0/ (which allows for use of this data beyond this competition) and acknowledge the use of this data as described on the TCIA website https://doi.org/10.7937/e4wt-cd02.
  4. Abide by the Challenge Rules (below).

Challenge rules:

Publication:

After the challenge has concluded, the challenge organizers may develop presentations and/or a manuscript, for submission to a peer-reviewed journal, describing the motivation, development, conduct, and analysis of the challenge results. By participating in this challenge, you agree that your submissions/results (and possible derivatives) may become part of such a manuscript. Your contribution will be acknowledged in the Acknowledgments section or as a co-author/group co-author. These communications will provide overviews of the challenge and the methods/results. Individual participants remain free to publish details of their methods and results in their own publications. Participants are encouraged to submit such individual manuscripts to SPIE's Journal of Medical Imaging or the AAPM's Medical Physics, as well as other venues.

Data use: 

Participants are free to download the training set and, subsequently, the validation and test sets when these datasets become available. Use of the data is subject to the data license.  

Training phase:

Participants may use the training set cases in any manner they would like for the purpose of training their systems (consistent with the data license); there will be no restrictions on the advice sought from local experts for training purposes. The participants can also combine the provided training data with other data if they disclose that in the description of the algorithm.

Validation and test phases: 

The validation set and test set cases, however, are to be manipulated, processed, and analyzed without human intervention.

In the validation phase of the challenge, a maximum of 50 submissions is allowed per research team. The validation phase is the only phase with a leaderboard.

In the test phase of the challenge, a maximum of 3 submissions is allowed per research team. Participants will not receive the scores for their test set submissions until after the challenge has closed. The best score of a team determines the placement within the challenge.

It is important to note that once participants submit their test set output to the challenge organizers, they will be considered fully vested in the challenge, so that their submissions and performance results will become part of any presentations, publications, or subsequent analyses derived from the challenge at the discretion of the organizers.

The submission of test set output will not be considered complete unless it is accompanied by (1) an agreement to be listed as a member of the DBTex Challenge Group by name and institution (or a request not to be so listed) and (2) a one-paragraph statement of the methods used to obtain the submitted results, including information regarding the approach and data set(s) used to train your system, the image analysis and segmentation (if applicable) methods used, the type of classifier, and any relevant references for these methods that should be cited in the challenge overview manuscript.

The truth associated with the test set cases is expected to be made publicly available after publication of the DBTex Challenges.

Software/Methods: 

Challenges are designed to motivate and reward novel computational approaches to a defined task. The use of commercial software (unless your group is affiliated with the organization that holds intellectual property rights to that software) or open source software (unless your group has a recognized association with the creation of that software) is not allowed, unless you can clearly demonstrate an innovative use, alteration, or enhancement to the application of such software.

Participants are encouraged to make their developed code publicly available on GitHub; the winning team is required to deposit their code and to give a presentation during the special challenge session at the SPIE Medical Imaging Symposium (2021) to be eligible for the $1000 prize.

General conduct:

By participating in the DBTex Challenges, you acknowledge their educational, friendly-competition, and community-building nature and commit to conduct consistent with this spirit for the advancement of the medical imaging research community. See this article for a discussion of lessons learned from the LUNGx Challenge, also sponsored by SPIE, AAPM, and NCI.



