The effect of spatial resolution on deep learning classification of lung cancer histopathology

BJR Open. 2023 Aug 15;5(1):20230008. doi: 10.1259/bjro.20230008. eCollection 2023.

Abstract

Objective: The microscopic analysis of biopsied lung nodules represents the gold standard for definitive diagnosis of lung cancer. Deep learning has achieved pathologist-level classification of non-small cell lung cancer histopathology images at high resolutions (0.5-2 µm/px), and recent studies have revealed tomography-histology relationships at lower spatial resolutions. Thus, we tested whether patterns supporting histological classification of lung cancer could be detected at spatial resolutions such as those offered by ultra-high-resolution CT.

Methods: We investigated the performance of a deep convolutional neural network (Inception-v3) in classifying lung histopathology images at spatial resolutions lower than those of typical pathology. Models were trained on 2167 histopathology slides from The Cancer Genome Atlas to differentiate between lung cancer tissues (adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC)) and normal dense tissue. Slides were accessed at 2.5× magnification (4 µm/px), and reduced resolutions of 8, 16, 32, 64, and 128 µm/px were simulated by applying digital low-pass filters.
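
As an illustration of the resolution-reduction step, the sketch below (not the authors' code) degrades a 4 µm/px RGB tile to a coarser effective resolution by Gaussian low-pass filtering while keeping the pixel grid unchanged; the choice of a Gaussian kernel and the sigma heuristic are assumptions, since the abstract states only that digital low-pass filters were applied.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_lower_resolution(patch, native_um_per_px=4.0, target_um_per_px=16.0):
        """Low-pass filter an (H, W, 3) RGB patch so its effective resolution
        approximates target_um_per_px while the pixel grid is unchanged.
        The Gaussian kernel and the sigma heuristic are illustrative choices,
        not the filter design used in the study."""
        factor = target_um_per_px / native_um_per_px   # e.g. 16 / 4 = 4
        sigma = factor / 2.0                           # assumed anti-alias cut-off
        # Blur the two spatial axes only; leave the colour channels untouched.
        return gaussian_filter(patch.astype(np.float32), sigma=(sigma, sigma, 0))

    # Example: degrade one tile to each simulated resolution in the study.
    # 299 x 299 matches Inception-v3's default input size.
    tile = np.random.rand(299, 299, 3).astype(np.float32)   # stand-in for a real tile
    degraded = {r: simulate_lower_resolution(tile, 4.0, r) for r in (8, 16, 32, 64, 128)}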

Results: The classifier achieved an area under the curve (AUC) ≥0.95 for all classes at spatial resolutions of 4-16 µm/px, and an AUC ≥0.95 for differentiating normal tissue from the two cancer types at 128 µm/px.

Conclusions: Features for tissue classification by deep learning exist at spatial resolutions below those typically used by pathologists.

Advances in knowledge: We demonstrated that a deep convolutional network could differentiate normal and cancerous lung tissue at spatial resolutions as low as 128 µm/px, and could distinguish LUAD, LUSC, and normal tissue at resolutions as low as 16 µm/px. Our data, together with the results of tomography-histology studies, indicate that these patterns should also be detectable in tomographic data at these resolutions.