American Journal of Neuroradiology


Research Article: Pediatric Neuroimaging
Open Access

Automated 3D Fetal Brain Segmentation Using an Optimized Deep Learning Approach

L. Zhao, J.D. Asis-Cruz, X. Feng, Y. Wu, K. Kapse, A. Largent, J. Quistorff, C. Lopez, D. Wu, K. Qing, C. Meyer and C. Limperopoulos
American Journal of Neuroradiology March 2022, 43 (3) 448-454; DOI: https://doi.org/10.3174/ajnr.A7419
Author affiliations:
a Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children’s National, Washington, DC
b Department of Biomedical Engineering (L.Z., D.W.), Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, China
c Department of Biomedical Engineering (X.F., C.M.), University of Virginia, Charlottesville, Virginia
d Department of Radiation Oncology (K.Q.), City of Hope National Center, Duarte, California

Abstract

BACKGROUND AND PURPOSE: MR imaging provides critical information about fetal brain growth and development. Currently, morphologic analysis primarily relies on manual segmentation, which is time-intensive and has limited repeatability. This work aimed to develop a deep learning–based automatic fetal brain segmentation method that provides improved accuracy and robustness compared with atlas-based methods.

MATERIALS AND METHODS: A total of 106 fetal MR imaging studies were acquired prospectively from fetuses between 23 and 39 weeks of gestation. We trained a deep learning model on the MR imaging scans of 65 healthy fetuses and compared its performance with a 4D atlas-based segmentation method using the Wilcoxon signed-rank test. The trained model was also evaluated on data from 41 fetuses diagnosed with congenital heart disease.

RESULTS: The proposed method showed high consistency with the manual segmentation, with an average Dice score of 0.897. It also demonstrated significantly improved performance (P < .001) based on the Dice score and 95% Hausdorff distance in all brain regions compared with the atlas-based method. The performance of the proposed method was consistent across gestational ages. The segmentations of the brains of fetuses with high-risk congenital heart disease were also highly consistent with the manual segmentation, though the Dice score was 7% lower than that of healthy fetuses.

CONCLUSIONS: The proposed deep learning method provides an efficient and reliable approach for fetal brain segmentation, which outperformed segmentation based on a 4D atlas and has been used in clinical and research settings.

ABBREVIATIONS:

BS = brain stem
CGM = cortical GM
CNN = convolutional neural network
CHD = congenital heart disease
DGM = deep GM
GA = gestational age

In vivo fetal brain MR imaging has provided critical insight into normal fetal brain development and has led to improved and more accurate diagnoses of brain abnormalities in the high-risk fetus.1 Morphologic fetal MR imaging studies have been used to quantify disturbances in fetal brain development associated with congenital heart disease (CHD).2 However, image segmentation, an essential step in morphologic analysis, is time-consuming and prone to inter-/intraobserver variability.

There are 3 major challenges in fetal MR imaging that affect image quality and reliable anatomic delineation. First, fetal brain anatomy changes rapidly with advancing gestational age (GA), resulting in dramatic morphologic changes in brain tissues. Cortical maturation (ie, gyrification and sulcation) during the second and third trimesters transforms the smooth fetal surface into a highly convoluted structure. Second, changes in water content accompanying active myelination introduce high variations in MR imaging signal intensity and contrast across GAs.3,4 Third, at times, artifacts corrupt fetal images. For example, maternal respiration and irregular fetal movements often result in motion artifacts. Differences in conductivity between amniotic fluid and tissues can cause standing wave artifacts. In addition, the large FOV for the maternal abdomen and limited scan time result in reduced image resolution and partial volume effects, in which a single image voxel may contain mixed tissues.5 These artifacts are more severe in fetal brains than in adult brains. Altogether, these 3 issues hamper fetal brain segmentation.

Because of the limited availability of fetal data, preterm infant brain segmentation has primarily been studied as an intermediate approach. Spatiotemporal atlases have been proposed to segment the brain from 28 weeks onward6 and the infant brain at 9–15 months7 and at 0–2 years.8 To address tissue contrast changes and artifacts, Xue et al4 proposed a modified expectation-maximization method to reduce partial volume effects with subject-specific atlases. Shi et al9 developed a method combining subject-specific characteristics and a conventional atlas with similarity weights. Wang et al10 proposed a patch-based approach built on a subject-specific atlas. We refer the reader to Devi et al11 for a comprehensive review. Although the fetal brain can be segmented using atlases developed from preterm infants, differences between fetal and preterm brains have been reported, including differences in brain volume12 and neural connectivity.13

In recent years, several groups have developed fetal brain atlases that serve as useful resources for direct segmentation of the fetal brain.14 Habas et al15 developed a 4D atlas based on fetal brain MR images for the mid-second trimester (20–24 weeks). Gholipour et al16 constructed a spatiotemporal atlas for a wider range of GAs between 19 and 39 weeks. However, manual correction is still required after atlas-based segmentation.17 Therefore, it is critical to find an accurate and reliable fetal brain segmentation method that can minimize the intensive work and time involved in manual refinement and, more important, can reduce inter-/intrarater variability, thus improving reproducibility in large-cohort studies.

Deep convolutional neural networks (CNNs) have shown promising performance in fetal medical image analyses, including fetal brain segmentation. In addition to localizing ROIs (eg, SonoNet18), fully convolutional networks have been used to successfully segment the fetal abdomen,19 the whole fetal envelope,20 and the fetal body.21 A multiscale, fine-tuned CNN has been proposed for fetal left ventricle segmentation.22 Additional studies using CNNs for fetal brain extraction include DeepCut by Rajchl et al,23 which was based on a CNN and a fully connected conditional random field. P-NET used CNNs with coarse and fine segmentation steps to locate the fetal brain.24 A gradient vector flow network has also been used.25 Likewise, a 2D U-Net26 and a multistate U-Net27 have been applied to fetal whole-brain extraction. Skull segmentation using a 2-stage CNN, in which the second stage comprises angle incidence and shadow casting maps, has also been proposed.28 However, segmentation that quantifies different brain tissue classes (eg, WM, GM, and CSF) is still needed for a more comprehensive volumetric and morphologic assessment of the fetal brain.

A 2D U-Net method was proposed for multitissue fetal brain MR imaging segmentation.29 Khalili et al29 used data augmentation with simulated intensity inhomogeneity artifacts to enhance the robustness of the segmentation. This method, however, was trained on a very small cohort (n = 12). Recently, Payette et al30 evaluated several 2D segmentation methods using the Fetal Tissue Annotation and Segmentation Dataset. Of the deep learning models assessed, the combined IBBM model,30 which included information from 3 separate 2D U-Net architectures (ie, axial, coronal, and sagittal), performed best, suggesting the value of combining information from all 3 planes. A 3D U-Net leverages the anatomic information in 3 directions and avoids the segmentation failures due to section discontinuity that may arise with 2D models. Another model in that study, KispiU, directly compared a 2D with a 3D U-Net. Contrary to expectation, the 2D U-Net performed better; this result was attributed to the reduced number of training samples and the use of nonoverlapping patches in the 3D U-Net.

In this work, we implemented a 3D U-Net for the automatic segmentation of the fetal brain into multiple tissue classes. The proposed method was developed using 65 fetal MR imaging scans from healthy fetuses and was compared with a 4D atlas-based segmentation method. The performance of the 3D U-Net was also evaluated on the brain MR imaging scans of 41 fetuses diagnosed with CHD. We hypothesized that the proposed method would learn fetal brain anatomy in high-order space; thus, this approach could segment brain regions with superior accuracy compared with an atlas-based method. Moreover, we speculated that segmentation performance would be improved across GAs. Last, we hypothesized that the same method can be used to reliably segment the brains of clinically high-risk fetuses, such as those with CHD.

MATERIALS AND METHODS

In this study, MR imaging data were acquired as part of prospective fetal brain longitudinal studies between 2014 and 2017. Pregnant women with healthy or low-risk pregnancies and with fetuses diagnosed with CHD in utero were included in the study. Pregnant women with pregnancy-related complications, multiple pregnancies, known disorders, maternal medications or illicit drug use, claustrophobia, or non-MR imaging–safe implants were excluded. Fetuses with extracardiac anomalies or chromosomal abnormalities were excluded. The study was approved by the institutional review board of Children’s National Medical Center in Washington, DC. Written informed consent was obtained from all volunteers.

MR Imaging Data Acquisition and Preprocessing

MR images were collected on a 1.5T scanner (Discovery MR450, GE Healthcare). 2D T2-weighted images were acquired in coronal, sagittal, and axial planes with 3 repetitions using the following parameters: FOV = 32 cm, matrix size = 256 × 256, section thickness = 2 mm, TE = 160 ms, TR = 1100 ms. All pregnant women were scanned without sedation.

Images were reconstructed to a high-resolution 3D volume (resolution = 0.875 × 0.875 × 0.875 mm) using a validated section-to-volume method with motion correction.31 3D images were reoriented manually. Skull stripping was performed using the FSL Brain Extraction Tool (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET),32 and whole-brain masks were manually corrected as needed. Intensity inhomogeneities were corrected using the N4ITK algorithm.33

Deep Learning Segmentation with 3D U-Net

Fetal brain images were cropped at the edges and rescaled to a matrix size of 80 × 110 × 90. Image patches of size 64 × 64 × 64 were randomly extracted. Each patch was normalized by subtracting its mean and dividing by its SD. A stride of 1 × 1 × 1 was used for training patches, and 4 × 4 × 4 was used for prediction patches. To generate additional data, images were flipped along the left-right direction, and the labels of overlapping patch regions were decided by majority voting during prediction (Online Supplemental Data).
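The patch handling described above (per-patch normalization, strided patch extraction, and majority voting over overlapping predictions) can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' code: the helper names are hypothetical, and the assumption that the first axis is the left-right direction is ours.

```python
import numpy as np

def normalize_patch(patch):
    """Normalize a patch by subtracting its mean and dividing by its SD."""
    return (patch - patch.mean()) / (patch.std() + 1e-8)

def extract_patches(volume, size=64, stride=4):
    """Slide a cubic window over the volume; return patch corner indices."""
    corners = []
    for x in range(0, volume.shape[0] - size + 1, stride):
        for y in range(0, volume.shape[1] - size + 1, stride):
            for z in range(0, volume.shape[2] - size + 1, stride):
                corners.append((x, y, z))
    return corners

def flip_left_right(volume):
    """Augmentation: mirror along the (assumed) left-right first axis."""
    return volume[::-1]

def majority_vote(label_patches, corners, shape, size=64, n_classes=7):
    """Fuse overlapping patch predictions by per-voxel majority voting."""
    votes = np.zeros(shape + (n_classes,), dtype=np.int32)
    for lab, (x, y, z) in zip(label_patches, corners):
        for c in range(n_classes):
            votes[x:x+size, y:y+size, z:z+size, c] += (lab == c)
    return votes.argmax(axis=-1)
```

With a prediction stride smaller than the patch size, each voxel receives several votes, which smooths label boundaries at patch seams.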

Compared with the standard U-Net, a parametric Rectified Linear Unit (PReLU) activation function was used, and 96 initial feature channels were used. The Adam optimizer was used with a learning rate of 1 × 10−4, and cross-entropy was used as the loss function. The model was trained for 20 epochs with a batch size of 4 and was validated every 128 steps.
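The two stated design choices, the PReLU activation and the cross-entropy loss, can be written out as minimal NumPy definitions. This is a sketch of the underlying mathematics only, not the authors' training code; the default slope of 0.25 is a common initialization, not a value reported in the article.

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for x > 0, learnable slope `a` for x <= 0."""
    return np.where(x > 0, x, a * x)

def cross_entropy(logits, labels):
    """Mean voxelwise cross-entropy.
    logits: (..., n_classes); labels: integer class map of the leading shape."""
    z = logits - logits.max(axis=-1, keepdims=True)        # stabilize softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    picked = np.take_along_axis(log_probs, labels[..., None], axis=-1)
    return -picked.mean()
```

Unlike a plain ReLU, the PReLU keeps a small gradient for negative inputs, which can help when feature activations are heavily skewed, as in mostly-background fetal volumes.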

To optimize model performance, we tested image normalization and 3 augmentation methods, including no augmentation, left-right flip, and 3-direction flip. The tests were repeated 5 times to assess stability.

Performance Evaluation

The healthy fetal brain was segmented by registering a GA-matched T2 template from a 4D fetal brain atlas15 to the subject’s brain using Advanced Normalization Tools (ANTS; http://stnava.github.io/ANTs/).34 After transforming template tissue labels to the subject’s brain, segmentations were corrected manually by a senior physician-neuroscientist (J.D.A.-C.) with expertise in MR imaging–based fetal-neonatal brain segmentation. These manually refined images served as ground truth data. The 6 tissue classes of interest were the cortical gray matter (CGM), WM, CSF, deep gray matter (DGM), cerebellum, and brain stem (BS). The proposed 3D U-Net method was compared with segmentations generated by the Developing Brain Region Annotation With Expectation-Maximization (DRAW-EM) package (BioMedIa),35 a widely used and previously validated atlas-based method. The MR images of fetuses with CHD were segmented using the DRAW-EM method and were manually corrected by an MR imaging engineer (K.K.), highly trained in perinatal segmentation. Using a second atlas as the basis for the ground truth data for the CHD fetal brain segmentation allowed us to examine the performance of the proposed model with minimal bias.

The proposed method was evaluated on the healthy fetal data using 10-fold cross-validation. Performance of the 3D U-Net was compared with the atlas-based method. Outputs from both approaches were compared with ground truth data (ie, manually-corrected labels). Segmentation performance metrics, Dice score, 95% Hausdorff distance, sensitivity, and specificity for each brain tissue class were calculated and compared using the Wilcoxon signed-rank test. The trained 3D U-Net model was then used to segment brain MR imaging of fetuses with CHD to assess the generalizability of the model to the clinical milieu.
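The evaluation metrics named above can be sketched in NumPy. These are generic definitions, not the authors' evaluation scripts; the brute-force 95% Hausdorff distance here compares all foreground voxel coordinates (surface extraction is omitted for brevity) and is practical only for small masks.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff95(a, b):
    """95th-percentile symmetric Hausdorff distance (voxel units)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)          # each point of a to its nearest point of b
    d_ba = d.min(axis=0)          # and vice versa
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

def sensitivity_specificity(pred, truth):
    """Voxelwise sensitivity and specificity of a binary segmentation."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return tp / truth.sum(), tn / (~truth).sum()
```

In practice each tissue class is binarized against the ground truth labels, the 4 metrics are computed per class, and the paired per-subject values feed the Wilcoxon signed-rank test.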

RESULTS

Study Population

The first data set included fetal brain MR images from healthy pregnancies. After we excluded images that contained severe motion artifacts, 65 fetal MR images from 54 fetuses (ie, 11 participants underwent a second MR imaging study 5–8 weeks later) between 24.4 and 39.4 weeks GA (mean, 32.5 [SD, 4.5] weeks) were evaluated. The second data set included brain MR images from 41 fetuses with CHD between 22.9 and 38.6 weeks GA (mean, 32.5 [SD, 3.8] weeks).

Performance with Augmentation and Normalization

The proposed method was more time-efficient than the atlas-based method. 3D U-Net segmentation took 2 minutes and 30 seconds to complete compared with 22 minutes for the atlas-based approach using 28 CPUs.

The proposed method had the best performance with image normalization and data augmentation using a left-right flip (Table 1). The training process using no augmentation resulted in a lower cross-entropy and a higher Dice score compared with the one using left-right flip augmentation. However, the validation performance was the opposite. This finding indicated that training without augmentation tended to overfit the data. With augmentation in 3 directions, the performance of training and validation was reduced, likely because of the unrealistic brain orientations produced. Furthermore, high Dice scores were achieved with normalized images, likely because of improved consistency among subjects and improved data balance from the reduced background.

Table 1:

Average performance of 3D U-Net augmentation methods and normalization across 5 repetitions

Accuracy of the 3D U-Net

The proposed method showed high segmentation accuracy on our normative fetal sample. On the cross-validation of healthy fetuses, the proposed method yielded an average Dice score of 0.897 across the 6 brain regions compared with 0.806 for the atlas-based method. The Dice score per region was also significantly higher (P < .001) for the proposed method (Table 2).

Table 2:

Dice scores per region

Figure 1A shows the segmentation results for a fetal brain at an early GA of 24 weeks and 5 days. The atlas-based method mislabeled the CSF as cortical gray matter, shown as light green when overlaid on the high-intensity signal of CSF in Fig 1 (upper row). The arrows on the sagittal/coronal images point to incorrectly labeled DGM, CSF, and CGM using the atlas-based approach. In contrast, the proposed method provided high consistency with the ground truth. Similarly, Fig 1B shows high consistency between the 3D U-Net and ground truth segmentation in a fetus at a late GA of 37 weeks and 2 days. In general, the proposed method resulted in smoother and continuous segmentation in the CGM compared with the atlas-based method.

FIG 1.

Comparison of segmentation methods on healthy fetuses of early and late GAs.

In all brain regions, the segmentation performance, measured with the Dice score and 95% Hausdorff distance, was significantly better (P < .001) for the proposed method compared with the atlas-based technique (Fig 2). Improved specificity and sensitivity scores were also noted in the CGM and WM regions for the 3D U-Net method.

FIG 2.

Regional comparisons between the proposed and conventional methods. The asterisk indicates P < .001. Cere indicates cerebellum.

Performance across GA

The proposed method showed consistent performance across GAs. As shown in Fig 3, the Dice score for each ROI was generally higher than that of the atlas-based method at each GA. In the CGM, the proposed method showed consistent performance from 24 to 39 weeks. In contrast, the atlas-based method showed reduced accuracy in the CSF and CGM around 35 weeks, when the secondary sulci develop. Furthermore, the conventional method showed reduced accuracy in the GM and WM regions around 26 weeks, when early myelination occurs in the thalamus.

FIG 3.

Regional performance across GAs.

Performance in the Fetus with CHD

The proposed model, trained on the healthy fetal brain, provided high accuracy in fetuses with CHD, as shown in Fig 4. It achieved an average Dice score of 0.831 (0.802 in the CSF, 0.744 in the CGM, 0.871 in the WM, 0.815 in the DGM, 0.887 in the cerebellum, and 0.869 in the BS), 7% lower than the average for healthy fetuses.

FIG 4.

Brain segmentation in a fetus with CHD. A, Manually corrected segmentation. B, The proposed method.

DISCUSSION

In this work, we implemented a 3D U-Net model for fetal brain MR imaging segmentation and demonstrated superior performance compared with the atlas-based technique. The tissue labels generated by the proposed method were highly consistent with manual segmentations and were more accurate compared with segmentations produced using a spatiotemporal atlas. The superiority of the proposed method likely stems from the learning model, which enabled the identification of high-dimensional and intrinsic features of the fetal brains. Notably, the proposed approach provided more consistent performance across the evaluated GA range (ie, 24–39 weeks) compared with the atlas-based method. We speculate that this will provide more reliable fetal segmentations for future large-scale studies. This method has since been implemented in an automatic image-processing pipeline that provides regional segmentation for quantitative fetal MR imaging measures in our clinical and research studies.

The proposed method demonstrated superior segmentation performance in all regions compared with conventional segmentation, based on the Dice scores and the 95% Hausdorff distance. Similarly, specificity and sensitivity scores for the CGM and WM regions were higher with our proposed method. The atlas-based method tended to overestimate the cerebellum and DGM, so that the labels for these tissues extended beyond the boundaries defined in the ground truth segmentation, as shown in Fig 2. This overestimation produced greater label overlap with the ground truth and higher sensitivity scores but much lower specificity scores than the proposed method. In contrast, the atlas-based technique tended to cover smaller CSF regions than the ground truth; the segmented region was almost always inside the ground truth, leading to a higher specificity score. However, this approach also missed some true CSF regions, which resulted in lower sensitivity. Thus, the discrepancies between sensitivity and specificity scores reflect inaccuracies of the conventional atlas-based method.

Data quality and preprocessing highly influence the quality of the image segmentation. In this work, the same data sets and preprocessing pipelines were used for both methods.15 Thus, the difference in segmentation performance was likely due to the segmentation method rather than to data quality or preprocessing. We expect the superior performance of the proposed method to be preserved with alternative data sets and processing steps; this expectation, however, needs to be empirically evaluated in future studies.

This study has several limitations. First, we used fewer data sets for training than typical adult brain segmentation studies. However, with 65 scans from healthy fetuses, our data set is larger than those of previous fetal brain MR imaging studies (12–50 fetal scans). Second, the data in this study were acquired on the same scanner using an identical protocol; thus, the reproducibility of the proposed method on other scanners requires further evaluation. Third, there are minor differences between the atlases used in the manual and conventional segmentations. However, because the ground truth was manually corrected, such mismatches were assumed to be removed. The definitions of the CGM and WM were similar in both atlases; therefore, the performance of the proposed method can be confirmed reliably in these regions. Fourth, the proposed model was trained on healthy fetal data and tested on fetuses with CHD. Inherent differences between the 2 data sets likely account for the reduced performance on the clinical CHD cohort. Nevertheless, an improved model based on transfer learning should be investigated further.

CONCLUSIONS

Our work demonstrated the feasibility and superior performance of the 3D U-Net method for fetal brain segmentation. The proposed method provided faster, more accurate, and more consistent segmentation across GAs than the conventional atlas-based method. Such advantages can provide reliable information for morphologic analysis and accurate quantitative criteria to support radiologists’ clinical diagnoses. Furthermore, the proposed pipeline will promote a standardized procedure and facilitate fetal brain image processing for large-cohort studies.

Footnotes

  • This work was supported, in part, by R01HL116585 from the National Institutes of Health National Heart, Lung, and Blood Institute, NIH R21EB022309 from the National Institute of Biomedical Imaging and Bioengineering, and UL1TR001876 National Institutes of Health National Center for Advancing Translation Sciences.

  • Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.


References

  1. 1.↵
    1. Clouchoux C,
    2. Guizard N,
    3. Evans AC, et al
    . Normative fetal brain growth by quantitative in vivo magnetic resonance imaging. Am J Obstet Gynecol 2012;206:173.e1–8 doi:10.1016/j.ajog.2011.10.002 pmid:22055336
    CrossRefPubMed
  2. 2.↵
    1. Limperopoulos C
    . Disorders of the fetal circulation and the fetal brain. Clin Perinatol 2009;36:561–77 doi:10.1016/j.clp.2009.07.005 pmid:19732614
    CrossRefPubMed
  3. Despotović I, Goossens B, Philips W. MRI segmentation of the human brain: challenges, methods, and applications. Comput Math Methods Med 2015;2015:450341. doi:10.1155/2015/450341 pmid:25945121
  4. Xue H, Srinivasan L, Jiang S, et al. Automatic segmentation and reconstruction of the cortex from neonatal MRI. Neuroimage 2007;38:461–77. doi:10.1016/j.neuroimage.2007.07.030 pmid:17888685
  5. Sui Y, Afacan O, Gholipour A, et al. Fast and high-resolution neonatal brain MRI through super-resolution reconstruction from acquisitions with variable slice selection direction. Front Neurosci 2021;15:636268. doi:10.3389/fnins.2021.636268 pmid:34220414
  6. Serag A, Aljabar P, Ball G, et al. Construction of a consistent high-definition spatio-temporal atlas of the developing brain using adaptive kernel regression. Neuroimage 2012;59:2255–65. doi:10.1016/j.neuroimage.2011.09.062 pmid:21985910
  7. Altaye M, Holland SK, Wilke M, et al. Infant brain probability templates for MRI segmentation and normalization. Neuroimage 2008;43:721–30. doi:10.1016/j.neuroimage.2008.07.060 pmid:18761410
  8. Shi F, Yap PT, Wu G, et al. Infant brain atlases from neonates to 1- and 2-year-olds. PLoS One 2011;6:e18746. doi:10.1371/journal.pone.0018746 pmid:21533194
  9. Shi F, Shen D, Yap PT, et al. CENTS: cortical enhanced neonatal tissue segmentation. Hum Brain Mapp 2011;32:382–96. doi:10.1002/hbm.21023 pmid:20690143
  10. Wang L, Shi F, Li G, et al. Segmentation of neonatal brain MR images using patch-driven level sets. Neuroimage 2014;84:141–58. doi:10.1016/j.neuroimage.2013.08.008 pmid:23968736
  11. Devi CN, Chandrasekharan A, Sundararaman VK, et al. Neonatal brain MRI segmentation: a review. Comput Biol Med 2015;64:163–78. doi:10.1016/j.compbiomed.2015.06.016 pmid:26189155
  12. Bouyssi-Kobar M, du Plessis AJ, McCarter R, et al. Third trimester brain growth in preterm infants compared with in utero healthy fetuses. Pediatrics 2016;138:e20161640. doi:10.1542/peds.2016-1640 pmid:27940782
  13. Thomason ME, Scheinost D, Manning JH, et al. Weak functional connectivity in the human fetal brain prior to preterm birth. Sci Rep 2017;7:39286. doi:10.1038/srep39286 pmid:28067865
  14. Makropoulos A, Counsell SJ, Rueckert D. A review on automatic fetal and neonatal brain MRI segmentation. Neuroimage 2018;170:231–48. doi:10.1016/j.neuroimage.2017.06.074 pmid:28666878
  15. Habas PA, Kim K, Rousseau F, et al. A spatio-temporal atlas of the human fetal brain with application to tissue segmentation. Med Image Comput Comput Assist Interv 2009;12(Pt 1):289–96. doi:10.1007/978-3-642-04268-3_36 pmid:20425999
  16. Gholipour A, Rollins CK, Velasco-Annis C, et al. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth. Sci Rep 2017;7:476. doi:10.1038/s41598-017-00525-w pmid:28352082
  17. Vasung L, Rollins CK, Velasco-Annis C, et al. Spatiotemporal differences in the regional cortical plate and subplate volume growth during fetal development. Cereb Cortex 2020;30:4438–53. doi:10.1093/cercor/bhaa033 pmid:32147720
  18. Baumgartner CF, Kamnitsas K, Matthew J, et al. SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound. IEEE Trans Med Imaging 2017;36:2204–15. doi:10.1109/TMI.2017.2712367 pmid:28708546
  19. Wu L, Xin Y, Li S, et al. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation. In: Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, April 18–21, 2017
  20. Joskowicz L, Event A, Leo AJ. Interactive fetal envelope segmentation in MRI scans by real-time fine-tuning of a fully convolutional network. In: Proceedings of the Israel Radiological Association Annual Meeting, Eilat, Israel, October 23–25, 2019
  21. Li Y, Xu R, Ohya J, et al. Automatic fetal body and amniotic fluid segmentation from fetal ultrasound images by encoder-decoder network with inner layers. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:1485–88. doi:10.1109/EMBC.2017.8037116 pmid:29060160
  22. Yu L, Guo Y, Wang Y, et al. Segmentation of fetal left ventricle in echocardiographic sequences based on dynamic convolutional neural networks. IEEE Trans Biomed Eng 2017;64:1886–95. doi:10.1109/TBME.2016.2628401 pmid:28113289
  23. Rajchl M, Lee MCH, Oktay O, et al. DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Trans Med Imaging 2017;36:674–83. doi:10.1109/TMI.2016.2621185 pmid:27845654
  24. Mori K, Sakuma I, Sato Y, et al, eds. Medical Image Computing and Computer-Assisted Intervention: MICCAI 2013, 16th International Conference, Nagoya, Japan, September 22–26, 2013, Proceedings, Part I
  25. Rong Y, Xiang D, Zhu W, et al. Deriving external forces via convolutional neural networks for biomedical image segmentation. Biomed Opt Express 2019;10:3800–14. doi:10.1364/BOE.10.003800 pmid:31452976
  26. Salehi SS, Hashemi SR, Velasco-Annis C, et al. Real-time automatic fetal brain extraction in fetal MRI by deep learning. In: Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, April 4–7, 2018
  27. Lou J, Li D, Bui TD, et al. Automatic fetal brain extraction using multi-stage U-Net with deep supervision. In: Suk HI, Liu M, Yan P, et al, eds. Machine Learning in Medical Imaging: 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Proceedings. Vol 11861. Springer; 2019:592–600
  28. Cerrolaza JJ, Sinclair M, Li Y, et al. Deep learning with ultrasound physics for fetal skull segmentation. In: Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, April 4–7, 2018
  29. Khalili N, Lessmann N, Turk E, et al. Automatic brain tissue segmentation in fetal MRI using convolutional neural networks. Magn Reson Imaging 2019;64:77–89. doi:10.1016/j.mri.2019.05.020 pmid:31181246
  30. Payette K, de Dumast P, Kebiri H, et al. An automatic multi-tissue human fetal brain segmentation benchmark using the Fetal Tissue Annotation Dataset. Sci Data 2021;8:167. doi:10.1038/s41597-021-00946-3 pmid:34230489
  31. Kainz B, Steinberger M, Wein W, et al. Fast volume reconstruction from motion corrupted stacks of 2D slices. IEEE Trans Med Imaging 2015;34:1901–13. doi:10.1109/TMI.2015.2415453 pmid:25807565
  32. Smith SM. Fast robust automated brain extraction. Hum Brain Mapp 2002;17:143–55. doi:10.1002/hbm.10062 pmid:12391568
  33. Tustison NJ, Avants BB, Cook PA, et al. N4ITK: improved N3 bias correction with robust B-spline approximation. In: Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, April 14–17, 2010. doi:10.1109/ISBI.2010.5490078
  34. Avants BB, Tustison NJ, Song G, et al. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 2011;54:2033–44. doi:10.1016/j.neuroimage.2010.09.025 pmid:20851191
  35. Makropoulos A, Gousias IS, Ledig C, et al. Automatic whole brain MRI segmentation of the developing neonatal brain. IEEE Trans Med Imaging 2014;33:1818–31. doi:10.1109/TMI.2014.2322280 pmid:24816548
  • Received August 25, 2021.
  • Accepted after revision December 6, 2021.
  • © 2022 by American Journal of Neuroradiology
Cite this article
L. Zhao, J.D. Asis-Cruz, X. Feng, Y. Wu, K. Kapse, A. Largent, J. Quistorff, C. Lopez, D. Wu, K. Qing, C. Meyer, C. Limperopoulos
Automated 3D Fetal Brain Segmentation Using an Optimized Deep Learning Approach
American Journal of Neuroradiology Mar 2022, 43 (3) 448-454; DOI: 10.3174/ajnr.A7419

