Connectivity-based Cortical Parcellation via Contrastive Learning on Spatial-Graph Convolution

BME Front. 2022 Mar 8;2022:9814824. doi: 10.34133/2022/9814824. eCollection 2022.

Abstract

Objective. The objective of this work is the development and evaluation of a cortical parcellation framework based on tractography-derived brain structural connectivity.

Impact Statement. The proposed framework applies novel spatial-graph representation learning methods to the task of cortical parcellation, an important medical image analysis and neuroscientific problem.

Introduction. The concept of the "connectional fingerprint" has motivated many investigations into connectivity-based cortical parcellation, especially with the technical advancement of diffusion imaging. Previous studies on multiple brain regions have been conducted with promising results. However, the performance and applicability of these models are limited by their relatively simple computational schemes and the lack of an effective representation of brain imaging data.

Methods. We propose the Spatial-graph Convolution Parcellation (SGCP) framework, a two-stage deep learning-based model for graph representations of brain imaging data. In the first stage, SGCP learns an effective embedding of the input data through a self-supervised contrastive learning scheme with a spatial-graph convolution network as the backbone encoder. In the second stage, SGCP learns a supervised classifier to perform voxel-wise classification for parcellating the desired brain region.

Results. SGCP is evaluated on the parcellation task for 5 brain regions in a 15-subject DWI dataset. Performance comparisons between SGCP, traditional parcellation methods, and other deep learning-based methods show that SGCP achieves superior performance in all cases.

Conclusion. The consistently good performance of the proposed SGCP framework indicates its potential as a general solution for investigating the regional/subregional composition of the human brain based on one or more connectivity measurements.
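For illustration only, the code below is a minimal sketch, in plain PyTorch, of how a two-stage scheme of this kind (contrastive pretraining of a graph-convolution encoder, followed by supervised voxel-wise classification) could be set up. The module names, layer sizes, the feature-dropout augmentation, and the NT-Xent loss are assumptions made for this example and are not taken from the paper's actual SGCP implementation.

# Minimal two-stage sketch: contrastive pretraining of a graph-convolution
# encoder, then supervised voxel-wise (node-wise) classification.
# All design choices here are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GraphConvEncoder(nn.Module):
    # Two-layer graph convolution producing one embedding per voxel (node).
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # NT-Xent loss: the same node in two augmented views forms a positive pair.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy data standing in for connectivity profiles of voxels within one region.
n_voxels, in_dim, n_subregions = 200, 64, 5
x = torch.randn(n_voxels, in_dim)                    # e.g., tractography-derived features
adj = (torch.rand(n_voxels, n_voxels) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()                  # symmetric spatial-neighborhood graph
adj_norm = normalize_adj(adj)

encoder = GraphConvEncoder(in_dim, 128, 32)

# Stage 1: self-supervised contrastive pretraining with feature-dropout augmentation.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(50):
    z1 = encoder(F.dropout(x, p=0.2), adj_norm)
    z2 = encoder(F.dropout(x, p=0.2), adj_norm)
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised voxel-wise classifier on top of the (frozen) learned embeddings.
classifier = nn.Linear(32, n_subregions)
labels = torch.randint(0, n_subregions, (n_voxels,))  # placeholder parcellation labels
opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-2)
for _ in range(100):
    with torch.no_grad():
        z = encoder(x, adj_norm)
    loss = F.cross_entropy(classifier(z), labels)
    opt2.zero_grad(); loss.backward(); opt2.step()

In this sketch, Stage 1 never sees parcellation labels; Stage 2 keeps the encoder fixed and trains only a linear classifier, which is one common way to evaluate contrastively learned embeddings.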