SPIE-AAPM-NCI-DAIR Digital Breast Tomosynthesis Cancer Detection Challenge (DBTex) - Open Benchmark


Phases:

Validation - start: April 1, 2022, midnight UTC
Test (current phase) - start: April 1, 2022, midnight UTC
End of competition: Dec. 31, 2050, midnight UTC

Overview: This page serves as an indefinite validation and testing phase for the DBTex2 Digital Breast Tomosynthesis Lesion Detection Challenge (Phase 2), allowing lesion detection algorithms to be evaluated on the validation and test sets of the public Duke BCS-DBT dataset. This provides a standardized metric both for model selection (via the validation set) and for measuring lesion detection performance (via the test set).

Citation: If you use this benchmark, please reference our associated paper:

Konz N, Buda M, Gu H, et al. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis. JAMA Netw Open. 2023;6(2):e230524. doi:10.1001/jamanetworkopen.2023.0524

Lesion Detection Algorithm Codebases:

Please check out https://github.com/mazurowski-lab/DBT-cancer-detection-algorithms for codebases of lesion detection algorithms submitted to the challenge, as well as baseline detection methods.

Note: Participants can visit https://www.reddit.com/r/DukeDBTData/ for additional advice and discussion.

Organizers: DBTex2 was organized by the Duke Center for Artificial Intelligence in Radiology (DAIR) in collaboration with the SPIE-AAPM-NCI Grand Challenges Committee.

Major Contributors for DBTex2:

  1. Maciej Mazurowski, Duke University (maciej.mazurowski@duke.edu)
  2. Sam Armato, University of Chicago (s-armato@uchicago.edu)
  3. Karen Drukker, University of Chicago (kdrukker@uchicago.edu)
  4. Lubomir Hadjiiski, University of Michigan (lhadjisk@umich.edu)
  5. Kenny Cha, FDA (Kenny.Cha@fda.hhs.gov)
  6. Keyvan Farahani, NIH/NCI (farahank@mail.nih.gov)
  7. Mateusz Buda, Duke University (mateusz.buda@duke.edu)
  8. Jichen Yang, Duke University (jy168@duke.edu)
  9. Nick Konz, Duke University (nicholas.konz@duke.edu)
  10. Ashirbani Saha, Duke University (as698@duke.edu)
  11. Reshma Munbodh, Brown University (reshma_munbodh@brown.edu)
  12. Jinzhong Yang, MD Anderson (jyang4@mdanderson.org)
  13. Nicholas Petrick, FDA (nicholas.petrick@fda.hhs.gov)
  14. Justin Kirby, NIH/NCI (kirbyju@mail.nih.gov)
  15. Jayashree Kalpathy-Cramer, Harvard University (kalpathy@nmr.mgh.harvard.edu)
  16. Benjamin Bearce, Massachusetts General Hospital

Formatting the submission file:

The output of your method submitted to the evaluation system should be a single CSV file with the following columns:

  1. PatientID: string - patient identifier
  2. StudyUID: string - study identifier
  3. View: string - view name, one of: RCC, LCC, RMLO, LMLO
  4. X: integer - X coordinate (on the horizontal axis) of the left edge of the predicted bounding box in 0-based indexing (for the left-most column of the image x=0)
  5. Width: integer - predicted bounding box width (along the horizontal axis)
  6. Y: integer - Y coordinate (on the vertical axis) of the top edge of the predicted bounding box in 0-based indexing (for the top-most row of the image y=0)
  7. Height: integer - predicted bounding box height (along the vertical axis)
  8. Z: integer - the first bounding box slice number in 0-based indexing (for the first slice of the image z=0)
  9. Depth: integer - predicted bounding box slice span (size along the depth axis)
  10. Score: float - predicted bounding box confidence score on an arbitrary scale, consistent across all cases within a single submission (e.g. 0.0 – 1.0)

Example:

PatientID,StudyUID,View,X,Width,Y,Height,Z,Depth,Score
ID1,UID1,RCC,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID2,UID2,LCC,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID3,UID3,RMLO,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)
ID4,UID4,LMLO,X(int),Width(int),Y(int),Height(int),Z(int),Depth(int),Score(float)

Coordinates of the predicted bounding boxes should be given in the correct image orientation. In the official competition GitHub repository, we provide a Python function for loading image data from a DICOM file into a 3D array of pixel values.
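The function in the repository is the authoritative loader; purely as a minimal illustration of the idea, a multi-frame DBT DICOM file can be read into a 3D array with pydicom as sketched below. Note that this sketch omits the view-dependent orientation handling that the official function performs.

  import numpy as np
  import pydicom

  def load_dbt_volume(dicom_path):
      """Read a multi-frame DBT DICOM file into a 3D array of pixel values.

      Minimal illustration only: unlike the official loader, it does not
      apply the view-dependent flips needed for the expected orientation.
      """
      ds = pydicom.dcmread(dicom_path)
      volume = ds.pixel_array  # shape (Z, Y, X): slices, rows, columns
      return np.asarray(volume)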

Each entry (row) in the submission file must correspond to exactly one predicted bounding box. There may be an arbitrary number of predicted bounding boxes for each DBT volume, and predictions are not required for every DBT volume.

An example submission file will be provided on the competition website.
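For illustration, a file with these columns can be assembled in Python, e.g. with pandas; the box values below are hypothetical placeholders, not real predictions.

  import pandas as pd

  # Hypothetical example predictions; one dict per predicted bounding box.
  predictions = [
      {"PatientID": "ID1", "StudyUID": "UID1", "View": "RCC",
       "X": 1024, "Width": 256, "Y": 512, "Height": 256,
       "Z": 20, "Depth": 10, "Score": 0.93},
  ]

  columns = ["PatientID", "StudyUID", "View", "X", "Width",
             "Y", "Height", "Z", "Depth", "Score"]
  pd.DataFrame(predictions, columns=columns).to_csv("submission.csv", index=False)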

Definition of a true-positive detection

A predicted box is counted as a true positive if the distance in pixels, measured in the original image, between its center point and the center of a ground-truth box is less than half of the ground-truth box's diagonal or 100 pixels, whichever is larger.

For the third dimension, the ground-truth bounding box is assumed to span 25% of the volume's slices before and after the ground-truth center slice; the center slice of the predicted box must fall within this range for the prediction to be considered a true positive.
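As a concrete sketch of this matching rule (not the official evaluation code; the dict keys mirror the submission columns, and gt_center_slice denotes the annotated center slice of the ground-truth box):

  import math

  def is_true_positive(pred, gt, gt_center_slice, num_slices):
      """Sketch of the DBTex true-positive rule for one predicted box.

      pred and gt are dicts with the submission-format keys
      (X, Y, Width, Height, Z, Depth); values are in pixels/slices.
      """
      # 2D rule: center distance below max(half GT diagonal, 100 px)
      pred_cx = pred["X"] + pred["Width"] / 2
      pred_cy = pred["Y"] + pred["Height"] / 2
      gt_cx = gt["X"] + gt["Width"] / 2
      gt_cy = gt["Y"] + gt["Height"] / 2
      center_dist = math.hypot(pred_cx - gt_cx, pred_cy - gt_cy)
      half_diagonal = math.hypot(gt["Width"], gt["Height"]) / 2
      if center_dist >= max(half_diagonal, 100):
          return False
      # Depth rule: predicted center slice within +/- 25% of the volume's
      # slice count around the ground-truth center slice
      pred_cz = pred["Z"] + pred["Depth"] / 2
      return abs(pred_cz - gt_center_slice) <= 0.25 * num_slices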

Performance metric:

The overall performance is assessed as the average sensitivity at 1, 2, 3, and 4 false positives per volume. Competition performance is assessed only on studies containing a biopsied lesion.
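A sketch of computing this metric follows (not the official evaluation script; it assumes each predicted box has already been labeled as a true or false positive by the rule above, with at most one true positive credited per ground-truth lesion):

  import numpy as np

  def average_sensitivity(scores, is_tp, num_lesions, num_volumes,
                          fp_rates=(1, 2, 3, 4)):
      """Mean sensitivity at fixed false-positive rates per volume.

      scores: confidence score of every predicted box (all volumes pooled);
      is_tp: matching boolean flags from the true-positive rule.
      """
      order = np.argsort(scores)[::-1]            # descending confidence
      flags = np.asarray(is_tp, dtype=bool)[order]
      tp = np.cumsum(flags)                       # cumulative true positives
      fp = np.cumsum(~flags)                      # cumulative false positives
      sensitivities = []
      for rate in fp_rates:
          budget = rate * num_volumes             # allowed FPs at this rate
          idx = np.searchsorted(fp, budget, side="right") - 1
          sensitivities.append(tp[idx] / num_lesions if idx >= 0 else 0.0)
      return float(np.mean(sensitivities))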

Terms and Conditions

Participants are encouraged to use this data beyond this benchmark, consistent with the use conditions described on the TCIA website (https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=64685580). Use of the data should be acknowledged as described on the TCIA website.
