SPIE-AAPM-NCI DAIR Digital Breast Tomosynthesis Lesion Detection Challenge (DBTex) - Phase 1 Forum


> Normal patients in validation

Hi,
A question regarding the validation (and future test) datasets: since the main metric is said to be computed only on biopsy-proven cases ("The primary performance metric will be determined using only views with a biopsied finding"), is it correct to assume that validation and test will contain only patients with biopsy-proven lesions, or will they also contain normal patients? I understand that a biopsy-proven patient will have both views with lesions and normal views, but my question is at the patient level.

Best,

Robert.

Posted by: robertmarti @ Jan. 5, 2021, 10:09 a.m.

All challenge subsets (train, validation, and test) contain studies from all four groups (cancer, benign, actionable, normal) described in the dataset preprint: https://arxiv.org/abs/2011.07995

Posted by: mateuszbuda @ Jan. 5, 2021, 4:30 p.m.

Please could you define 'all' and 'positive' as per the results log?
'sensitivity_at_2_fps_all': ..., 'sensitivity_at_1_fps_positive': ..., 'sensitivity_at_2_fps_positive': ..., 'sensitivity_at_3_fps_positive': ... 'sensitivity_at_4_fps_positive': ....
Thanks

Posted by: smorrell @ Jan. 8, 2021, 10:57 a.m.

Sure. Performance metrics with the "all" suffix are computed on all views included in the file-paths CSV file, whereas metrics with the "positive" suffix are computed using only views with a biopsied finding, as described in the Performance metric section: http://spie-aapm-nci-dair.westus2.cloudapp.azure.com/competitions/4#learn_the_details-evaluation
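For anyone trying to reproduce the numbers, the "all" vs. "positive" distinction just changes which views the detections are pooled over before computing sensitivity at a fixed false-positive rate. Below is a minimal sketch of that computation; the function name, the `(score, is_tp)` input format, and the matching rule are my own illustration, not the challenge's official evaluation code (see the repository for that):

```python
def sensitivity_at_fps(predictions, num_views, num_lesions, fps_per_view):
    """Sensitivity at a fixed number of false positives per view.

    predictions: list of (score, is_tp) tuples pooled over the chosen
    subset of views ("all" views, or only biopsy-proven "positive" views),
    where is_tp marks whether the detection matched a ground-truth lesion.
    """
    # Rank detections by confidence, highest first.
    ranked = sorted(predictions, key=lambda p: p[0], reverse=True)
    # Total false-positive budget at this operating point.
    max_fps = fps_per_view * num_views
    tps = fps = 0
    for _score, is_tp in ranked:
        if is_tp:
            tps += 1
        else:
            fps += 1
            if fps > max_fps:
                break  # budget exhausted; stop counting detections
    return tps / num_lesions
```

Running it once over all views gives the "all" metric and once over only the biopsied views gives the "positive" metric, at each FP budget (1, 2, 3, 4 per view).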

Posted by: mateuszbuda @ Jan. 8, 2021, 11:21 a.m.

Thanks Mateusz.

I think evaluate.py does not calculate the metrics with the "positive" suffix, which is the competition's metric. It computes either "all" (including Normal & Actionable) on line 60, only cancer on line 76, or only benign on line 92. Have I missed something? Thx

Posted by: smorrell @ Jan. 8, 2021, 1:06 p.m.

Here is a link to the competition repository on GitHub: https://github.com/MaciejMazurowski/duke-dbt-data
It includes functions that are used for evaluation in the duke_dbt_data.py file.

Posted by: mateuszbuda @ Jan. 8, 2021, 2:30 p.m.