Mixed Supervision of Histopathology Improves Prostate Cancer Classification from MRI

IEEE Trans Med Imaging. 2024 Mar 28:PP. doi: 10.1109/TMI.2024.3382909. Online ahead of print.

Abstract

Non-invasive prostate cancer classification from MRI has the potential to revolutionize patient care by providing early detection of clinically significant disease, but has thus far shown limited positive predictive value. To address this, we present an image-based deep learning method to predict clinically significant prostate cancer from screening MRI in patients who subsequently underwent biopsy, with results ranging from benign pathology to the highest-grade tumors. Specifically, we demonstrate that mixed supervision via diverse histopathological ground truth improves classification performance despite the cost of reduced concordance with image-based segmentation. Where prior approaches have used pathology results derived from targeted biopsies and whole-mount prostatectomy as ground truth to strongly supervise the localization of clinically significant cancer, our approach also uses weak supervision signals extracted from nontargeted systematic biopsies with regional localization to improve overall performance. Our key innovation is performing regression by distribution rather than simply by value, enabling the use of additional pathology findings traditionally ignored by deep learning strategies. We evaluated our model on a dataset of 973 (testing n = 198) multi-parametric prostate MRI exams collected at UCSF from 2016 to 2019, each followed by MRI/ultrasound fusion (targeted) biopsy and systematic (nontargeted) biopsy of the prostate gland, demonstrating that deep networks trained with mixed supervision of histopathology can feasibly exceed the performance of the Prostate Imaging-Reporting and Data System (PI-RADS) clinical standard for prostate MRI interpretation (71.6% vs. 66.7% balanced accuracy and 0.724 vs. 0.716 AUC).
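
For illustration only, the sketch below shows one way "regression by distribution" against pathology-derived targets might look in practice; it is not the authors' implementation, and the grade bins, function names, and the construction of soft targets from multiple biopsy cores are assumptions made for the example.

    # Hedged sketch (assumed, not the paper's code): supervising a grade
    # classifier with a *distribution* over grade bins rather than a single
    # scalar label, so that nontargeted systematic-biopsy findings can also
    # contribute weak supervision.
    import torch
    import torch.nn.functional as F

    NUM_GRADES = 6  # hypothetical bins: benign plus ISUP grade groups 1-5

    def soft_target_from_pathology(reported_grades, weights=None):
        """Build a target distribution over grade bins from one or more
        pathology findings (e.g., several systematic cores in one region),
        instead of collapsing them to a single value."""
        target = torch.zeros(NUM_GRADES)
        if weights is None:
            weights = [1.0] * len(reported_grades)
        for grade, w in zip(reported_grades, weights):
            target[grade] += w
        return target / target.sum()

    def distribution_regression_loss(logits, target_dist):
        """KL divergence between the predicted grade distribution and the
        pathology-derived target; this reduces to cross-entropy when the
        target is a one-hot label from a targeted biopsy."""
        log_probs = F.log_softmax(logits, dim=-1)
        return F.kl_div(log_probs, target_dist, reduction="batchmean")

    # Example: a region with two systematic cores, one benign (bin 0) and one
    # ISUP grade group 2 (bin 2), yields a soft target; a targeted-biopsy
    # lesion would instead yield a one-hot target.
    logits = torch.randn(1, NUM_GRADES)                      # model output for one region
    target = soft_target_from_pathology([0, 2]).unsqueeze(0)
    loss = distribution_regression_loss(logits, target)
    print(float(loss))

In this reading, the "mixed" aspect is simply that strongly localized findings produce sharp (one-hot) targets while regionally localized systematic-biopsy findings produce soft targets, and both are optimized with the same distributional loss.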