Fully convolutional neural networks improve abdominal organ segmentation

Meg F. Bobo, Shunxing Bao, Yuankai Huo, Yuang Yao, Jack Virostko, Andrew J. Plassard, Ilwoo Lyu, Albert Assad, Richard G. Abramson, Melissa A. Hilmes, Bennett A. Landman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole-abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
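The abstract's evaluation pipeline can be sketched in a few lines: compute a per-organ Dice similarity coefficient (DSC) between a predicted and a manual binary mask, then compare the per-subject DSC distributions of the two methods with a Wilcoxon rank-sum test. This is a minimal illustration using toy masks and made-up DSC lists, not the paper's data or code.

```python
import numpy as np
from scipy.stats import ranksums

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2-D masks standing in for one organ label on one slice.
pred = np.zeros((8, 8), dtype=int)
truth = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1   # 16 voxels predicted
truth[3:7, 3:7] = 1  # 16 voxels manual; overlap is 9 voxels
print(dice(pred, truth))  # 2*9 / (16+16) = 0.5625

# Hypothetical per-subject DSCs for the two methods; the paper's
# significance claim rests on this kind of rank-sum comparison.
fcnn_dsc = [0.91, 0.93, 0.90, 0.94, 0.92]
atlas_dsc = [0.80, 0.78, 0.82, 0.79, 0.81]
stat, p = ranksums(fcnn_dsc, atlas_dsc)
print(p < 0.05)
```

The DSC ranges from 0 (no overlap) to 1 (identical masks), which is why the reported spleen (0.930) and liver (0.913) values indicate near-complete agreement while the stomach (0.556) does not.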

Original language: English (US)
Title of host publication: Medical Imaging 2018
Subtitle of host publication: Image Processing
Editors: Elsa D. Angelini, Bennett A. Landman
Publisher: SPIE
ISBN (Electronic): 9781510616370
DOI: 10.1117/12.2293751
State: Published - Jan 1 2018
Event: Medical Imaging 2018: Image Processing - Houston, United States
Duration: Feb 11 2018 – Feb 13 2018

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 10574
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2018: Image Processing
Country: United States
City: Houston
Period: 2/11/18 – 2/13/18

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Biomaterials
  • Atomic and Molecular Physics, and Optics
  • Radiology, Nuclear Medicine and Imaging

Cite this

Bobo, M. F., Bao, S., Huo, Y., Yao, Y., Virostko, J., Plassard, A. J., ... Landman, B. A. (2018). Fully convolutional neural networks improve abdominal organ segmentation. In E. D. Angelini & B. A. Landman (Eds.), Medical Imaging 2018: Image Processing [105742V] (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 10574). SPIE. https://doi.org/10.1117/12.2293751
