CPFTransformer: transformer fusion context pyramid medical image segmentation network

Front Neurosci. 2023 Dec 7:17:1288366. doi: 10.3389/fnins.2023.1288366. eCollection 2023.

Abstract

Introduction: The application of U-shaped convolutional neural network (CNN) methods to medical image segmentation tasks has yielded impressive results. However, this structure's single-level context extraction can lead to problems such as boundary blurring, so it needs to be improved. Additionally, the inherent locality of the convolution operation restricts its ability to capture global and long-range semantic interactions effectively. Conversely, the transformer model excels at capturing global information.

Methods: Given these considerations, this paper presents a transformer fusion context pyramid medical image segmentation network (CPFTransformer). The CPFTransformer builds on the Swin Transformer and integrates edge perception to sharpen segmentation boundaries. To fuse global and multi-scale context information effectively, we introduce an Edge-Aware module based on a context pyramid, which specifically emphasizes local features such as edges and corners. Our approach employs a hierarchical Swin Transformer with a shifted window mechanism as the encoder to extract contextual features. A decoder based on a symmetric Swin Transformer performs the upsampling operations, restoring the resolution of the feature maps. The encoder and decoder are connected by the Edge-Aware module, which extracts the local edge and corner features.
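The data flow described above can be illustrated with a minimal, framework-free sketch. This is not the authors' implementation: average pooling stands in for Swin patch merging, nearest-neighbour upsampling stands in for the decoder's patch expanding, and a simple gradient-emphasis function stands in for the Edge-Aware skip module; all function names here are hypothetical.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling (stand-in for a Swin stage with patch merging)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling (stand-in for patch expanding)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def edge_aware(x):
    """Toy Edge-Aware skip module: emphasise local gradients (edges/corners)."""
    gx = np.abs(np.diff(x, axis=1, prepend=x[:, :1]))
    gy = np.abs(np.diff(x, axis=0, prepend=x[:1, :]))
    return x + gx + gy

def u_shaped_forward(img, depth=3):
    """U-shaped pass: the encoder downsamples, each skip connection runs
    through the Edge-Aware module, and the decoder upsamples and fuses
    the multi-scale skip features back in."""
    skips, x = [], img
    for _ in range(depth):
        skips.append(edge_aware(x))   # skip path through Edge-Aware module
        x = avg_pool2(x)              # encoder stage
    for skip in reversed(skips):
        x = upsample2(x) + skip       # decoder stage fuses pyramid context
    return x

out = u_shaped_forward(np.random.rand(32, 32))
print(out.shape)  # (32, 32): resolution is restored by the decoder
```

The point of the sketch is the topology, not the operators: every encoder scale contributes an edge-emphasised feature map that the symmetric decoder consumes at the matching resolution.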

Results: Experimental evaluations on the Synapse multi-organ segmentation task and the ACDC dataset demonstrate the effectiveness of our method, which achieves a Dice similarity coefficient (DSC) of 79.87% and a Hausdorff distance (HD) of 20.83 on the Synapse multi-organ segmentation task.
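For readers unfamiliar with the two metrics reported above, the following sketch computes both for binary masks. DSC measures volumetric overlap (higher is better); the Hausdorff distance measures the worst-case boundary deviation in pixels (lower is better). The masks and helper names are illustrative, not from the paper.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the two masks' point sets."""
    a, b = np.argwhere(pred), np.argwhere(gt)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two 4x4 squares, offset by one column: 12 of 16 pixels overlap.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), bool); gt[2:6, 3:7] = True
print(dice(pred, gt))       # 0.75 = 2*12 / (16+16)
print(hausdorff(pred, gt))  # 1.0: each mask extends one column past the other
```

Note that DSC is a ratio (reported as a percentage), whereas HD is a distance in the image's spatial units, which is why the two numbers in the Results are not directly comparable.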

Discussion: The method proposed in this paper, which combines the context pyramid mechanism and Transformer, enables fast and accurate automatic segmentation of medical images, thereby significantly enhancing the precision and reliability of medical diagnosis. Furthermore, the approach presented in this study can potentially be extended to image segmentation of other organs in the future.

Keywords: Edge-Aware module; Swin Transformer; context pyramid fusion network; medical image segmentation; multiscale feature.

Grants and funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was funded by the Central Guided Local Science and Technology Development Fund of Shanxi Province (project: Research on Key Technologies for Improving Low-Quality Image Quality, grant number YDZJSX2022A016) and by the General Program of the National Natural Science Foundation of China (project: Research on the Method of Cross-Species Comparison Between Human and Macaque Based on High-Precision Characteristics of Brain Images, grant number 61976150).