U-NetCTS: U-Net deep neural network for fully automatic segmentation of 3D CT DICOM volume

O. Dorgham, M. Abu Naser, M. H. Ryalat, A. Hyari, N. Al-Najdawi, S. Mirjalili

Research output: Contribution to journal › Article › peer-review


Abstract

The accurate segmentation of computed tomography (CT) scan volumes is an essential step in radiomic analysis as well as in developing advanced surgical planning techniques with numerous medical applications. When this process is performed manually by a clinician, it is laborious, time-consuming, and prone to error, and its success depends to a large extent on the clinician's level of experience. In this work, we propose an automated deep learning (DL) segmentation framework for CT images, called U-Net CT Segmentation (U-NetCTS), that applies the U-Net architecture to the automatic segmentation of CT images. Experimental results show that the U-NetCTS framework can segment different CT DICOM regions of interest across a range of random CT volumes. A statistical and qualitative comparison of the CT slices automatically segmented by the U-NetCTS framework against ground-truth images indicates that the framework achieves a high level of accuracy, with Tanimoto coefficient, Dice similarity coefficient (DSC), and peak signal-to-noise ratio (PSNR) values of 99.06%, 99.52%, and 53.29 dB, respectively. The DSC value is also higher than that of state-of-the-art DL techniques for the automatic segmentation of CT images of various human organs. Furthermore, a total of 3595 CT slices covering various CT regions of interest was employed in this study to validate the results.
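For reference, the three reported evaluation metrics are standard and can be computed directly from a predicted segmentation mask and its ground truth. Below is a minimal NumPy sketch, assuming binary masks for the overlap metrics and 8-bit intensity images for PSNR; the function names and signatures are illustrative and not taken from the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the three metrics
# reported in the abstract: Tanimoto (Jaccard), Dice, and PSNR.
import numpy as np

def tanimoto(pred: np.ndarray, truth: np.ndarray) -> float:
    """Tanimoto (Jaccard) coefficient: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / total if total else 1.0

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```

Under this reading, overlap values near 1.0 and PSNR values in the tens of dB indicate close agreement between the predicted and ground-truth masks, consistent with the ranges the abstract reports.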

Original language: English
Article number: 100304
Journal: Smart Health
Volume: 26
DOI:
Publication status: Published - Dec 2022

Keywords

  • Automatic segmentation
  • CT Volume
  • Deep learning
  • DICOM images
  • Machine learning
  • Neural networks
  • U-net
