Research Article: Artificial Intelligence

Empowering Data Sharing in Neuroscience: A Deep Learning Deidentification Method for Pediatric Brain MRIs

Ariana M. Familiar, Neda Khalili, Nastaran Khalili, Cassidy Schuman, Evan Grove, Karthik Viswanathan, Jakob Seidlitz, Aaron Alexander-Bloch, Anna Zapaishchykova, Benjamin H. Kann, Arastoo Vossough, Phillip B. Storm, Adam C. Resnick, Anahita Fathi Kazerooni and Ali Nabavizadeh
American Journal of Neuroradiology May 2025, 46 (5) 964-972; DOI: https://doi.org/10.3174/ajnr.A8581
Author Affiliations

a From the Center for Data-Driven Discovery in Biomedicine (D3b) (A.M.F., Neda K., Nastaran K., K.V., A.V., P.B.S., A.C.R., A.F.K., A.N.), Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
b Department of Neurosurgery (A.M.F., Neda K., Nastaran K., K.V., P.B.S., A.C.R., A.F.K.), Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
c School of Engineering and Applied Science (C.S., E.G.), University of Pennsylvania, Philadelphia, Pennsylvania
d Department of Child and Adolescent Psychiatry and Behavioral Science (J.S., A.A.-B.), The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
e Department of Psychiatry (J.S., A.A.-B.), University of Pennsylvania, Philadelphia, Pennsylvania
f Lifespan Brain Institute at the Children’s Hospital of Philadelphia and University of Pennsylvania (A.A.-B.), Philadelphia, Pennsylvania
g Artificial Intelligence in Medicine (AIM) Program (A.Z., B.H.K.), Mass General Brigham, Harvard Medical School, Boston, Massachusetts
h Department of Radiation Oncology (A.Z., B.H.K.), Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
i Division of Radiology (A.V.), Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
j Department of Radiology, Perelman School of Medicine (A.V., A.N.), University of Pennsylvania, Philadelphia, Pennsylvania
k AI2D Center for AI and Data Science for Integrated Diagnostics (A.F.K.), University of Pennsylvania, Philadelphia, Pennsylvania

Abstract

BACKGROUND AND PURPOSE: Privacy concerns, such as identifiable facial features within brain scans, have hindered the availability of pediatric neuroimaging data sets for research. Consequently, pediatric neuroscience research lags behind its adult counterpart, particularly for rare diseases and under-represented populations. The removal of face regions (image defacing) can mitigate this; however, existing defacing tools often fail with pediatric cases and diverse image types, leaving a critical gap in data accessibility. Given recent National Institutes of Health data sharing mandates, novel solutions are critically needed.

MATERIALS AND METHODS: To develop an artificial intelligence (AI)-powered tool for automatic defacing of pediatric brain MRIs, deep learning methodologies (nnU-Net) were applied to a large, diverse, multi-institutional data set of clinical radiology images. This included multiparametric MRIs (T1-weighted [T1W], T1W contrast-enhanced, T2-weighted [T2W], T2W-FLAIR), with 976 total images from 208 patients with brain tumor (Children’s Brain Tumor Network, CBTN) and 36 clinical control patients (Scans with Limited Imaging Pathology, SLIP) ranging in age from 7 days to 21 years.

RESULTS: Face and ear removal accuracy for withheld testing data was the primary measure of model performance. Potential influences of defacing on downstream research usage were evaluated with standard image processing and AI-based pipelines. Group-level statistical trends were compared between original (nondefaced) and defaced images. Across image types, the model had high accuracy for removing face regions (mean accuracy, 98%; n = 98 subjects/392 images), with lower performance for removal of ears (73%). Analysis of global and regional brain measures (SLIP cohort) showed minimal differences between original and defaced outputs (mean rS = 0.93, all P < .0001). AI-generated whole brain and tumor volumes (CBTN cohort) and temporalis muscle metrics (volume, cross-sectional area, centile scores; SLIP cohort) were not significantly affected by image defacing (all rS > 0.9, P < .0001).

CONCLUSIONS: The defacing model demonstrates efficacy in removing facial regions across multiple MRI types and exhibits minimal impact on downstream research usage. A software package with the trained model is freely provided for wider use and further development (pediatric-auto-defacer; https://github.com/d3b-center/pediatric-auto-defacer-public). By offering a solution tailored to pediatric cases and multiple MRI sequences, this defacing tool will expedite research efforts and promote broader adoption of data sharing practices within the neuroscience community.

ABBREVIATIONS: AI = artificial intelligence; CBTN = Children’s Brain Tumor Network; CE = contrast-enhanced; CHOP = Children’s Hospital of Philadelphia; CSA = cross-sectional area; LH = left hemisphere; NIH = National Institutes of Health; RH = right hemisphere; SEM = standard error of the mean; SLIP = Scans with Limited Imaging Pathology; T1W = T1-weighted; T2W = T2-weighted; TMT = temporalis muscle thickness

SUMMARY

PREVIOUS LITERATURE:

Scientific data sharing promotes reproducibility of research and translation of findings into clinical care. Several centralized repositories have enabled broad sharing of large-scale imaging data sets; however, pediatric data sets have lagged behind their adult counterparts, and neuroimaging data are particularly challenging to share due to privacy concerns, because brain scans can reveal identifiable features. Existing “defacing” tools for removing face regions are primarily designed for adult scans; they often struggle with pediatric images and do not generalize to a variety of sequence types. This work introduces the first tool (pediatric-auto-defacer) specifically for removing facial features from multiparametric pediatric MRIs, addressing a critical gap in data sharing for neuroscience research.

KEY FINDINGS:

A model was developed to automatically remove facial regions from brain MRIs for anonymization purposes. It performs well on several sequence types across various acquisition parameters, and does not over-remove brain tissue. Based on testing, defacing does not affect downstream analytical pipelines (eg, image preprocessing or measured group-level trends).

KNOWLEDGE ADVANCEMENT:

To facilitate broad sharing of pediatric neuroimaging data sets, a robust, automatic deidentification tool is provided to ease the burden on research teams to prepare and release imaging data while protecting patient privacy.

Data sharing is a critical component of research endeavors, as it promotes scientific transparency and data reuse. For the study of rare diseases, data sharing is crucial for gathering a meaningful group of samples to enable statistical comparisons in the given patient population. Due to calls to action across disciplines, data sharing plans have recently become a mandate for National Institutes of Health (NIH)-funded projects, and depositing data files in centralized repositories is now required by many scientific journals for publication. Such efforts will facilitate the reproducibility of research studies and consequently their translation into real-world applications such as clinical care contexts, as well as bolster the inclusion of historically under-represented populations, which can mitigate bias in developed models and support fair artificial intelligence (AI) in health care.1

In alignment with FAIR2 principles, several imaging data repositories have been established, such as the Alzheimer Disease Neuroimaging Initiative3 and the National Cancer Institute’s The Cancer Imaging Archive4 and Imaging Data Commons, which provide effective data discovery and accessibility. While several large-scale, multi-institutional imaging data sets exist, such as the National Lung Screening Trial (NLST) for lung cancer (chest CTs from more than 26,000 patients)5 and the Breast Cancer Screening Digital Breast Tomosynthesis data set (breast mammograms from 5060 patients),6 comparable radiology data sets in neuroscience fields have lagged behind their counterparts, primarily due to the greater difficulty of removing identifying information from brain (head and neck) scans. Brain images can be inherently identifiable due to the presence of an individual’s face, and their release can jeopardize patient privacy. Studies have shown that brain MRIs can be used to identify subjects by matching to their photograph,7,8 even after face regions have been blurred.9 “Defacing,” or the removal of face regions in an image, is one way to mitigate this issue, and several defacing software tools for structural brain MRIs have been developed (eg, mri_deface,10 pydeface,11 fsl_deface,12 and others13,14), some of which have less impact on downstream processing than others.15,16 That said, existing tools do not typically perform well on pediatric cases,17 particularly in young children and infants, likely due to differences in brain and face anatomy across developmental stages. For example, 1 study found that FSL’s defacing removed brain tissue in most children (ages 8–11) and in some young adult (ages 19–31) cases, and had worse performance for eye and mouth removal compared with adults.18 FreeSurfer had better performance for face removal without impacting brain tissue in the same cases; however, it was more invasive in removing intraorbital and brainstem structures. Many tools rely on alignment to standardized face or brain atlases created with adult MRIs, and therefore fail to properly deface pediatric scans. Additionally, most are developed for T1-weighted (T1W) sequences, and there remains a need for accessible tools for defacing additional sequence types collected under standard clinical imaging protocols (eg, T2-weighted [T2W]).

Pediatric data sharing has been significantly hindered by regulatory barriers related to privacy concerns, creating a critical unmet need for public imaging data sets. Herein, we build a tool to enable automatic removal of face regions from multiple types of pediatric MRIs, with the goal of facilitating data sharing across neuroscience fields. This is, to the best of our knowledge, the first available pediatric defacing tool. To address the need for a tool that can operate across multiparametric MRIs, we use a large, multi-institutional clinical radiology data set (Children’s Brain Tumor Network [CBTN]19) with deep learning AI methods to develop a model for minimally invasive defacing. Our model was trained and validated with 208 pediatric brain tumor subjects (832 total images) and 36 clinical control subjects (144 images from the Scans with Limited Imaging Pathology [SLIP] cohort20), with 4 image sequences included per subject (T1W, T1W contrast-enhanced [T1W-CE], T2W, and T2W-FLAIR sequences). Images were acquired through clinical protocols, and thus capture real-world heterogeneity in scanner and image acquisition properties.

MATERIALS AND METHODS

Patient Cohorts

Retrospective data were collected from the CBTN,19 a large-scale, multi-institutional repository of longitudinal clinical, imaging, genomic, and other paired data.21 Two hundred eight subjects were selected based on imaging availability and inclusion of a range of ages at the time of imaging (median age 8; minimum = 0.35, maximum = 21.71 years) and cancer histologies (Fig 1, Table, Supplemental Data). MRI scans were unprocessed images from treatment-naïve clinical examinations (T1W, T1W-CE, T2W, and T2W-FLAIR). All subjects had histologically confirmed pediatric brain tumors.

FIG 1. Diagram of overall study workflow. Data cohorts included brain tumor (CBTN) and nonbrain tumor control (SLIP). Initial ground truth face masks were created with MiDeface and manually edited. A 3D deep learning model was trained with the nnUNet framework, by using a single image as input, and tested on withheld data. The impact of defacing on downstream image processing and AI-based pipelines was evaluated with CBTN and SLIP testing data. The trained model is provided in an open-source software container on GitHub.

Table. Patient characteristics in the studied cohorts

To test generalizability to nonbrain tumor patients (clinical control group), a cohort of 40 subjects with available images from the SLIP20 data set was selected to match the general distributions of age and sex of the CBTN cohort. Thirty-six subjects had sufficient images and were included in the main analyses.

Ground Truth Creation with Semiautomated Face Mask Segmentation

Preliminary face masks were generated for each image by using the MiDeface22 algorithm and then were manually edited. Of the 976 images, 507 (52%) were found to be inaccurately defaced and were manually revised by using the ITK-SNAP23 software (by authors C.S., E.G.; Supplemental Data). The criteria for an accurate face mask were that no brain region or temporalis muscle (given its potential implications as a biomarker24) was affected and that identifiable facial features, including the eyes, nose, mouth, and ears, were fully included. Common corrections included restoring brain voxels, particularly in the right prefrontal cortex, and properly realigning the face mask to the subject’s face.
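The brain-sparing criterion above can also be checked programmatically. The following is a minimal sketch, not the authors’ tooling, assuming co-registered NIfTI masks loaded with nibabel; the file names are hypothetical, and the same check can be run against a temporalis muscle mask.

```python
# Minimal QC sketch: flag an edited face mask that removes brain voxels.
# Assumes the face and brain masks are co-registered NIfTI volumes;
# file names are hypothetical.
import nibabel as nib
import numpy as np

face = nib.load("face_mask.nii.gz").get_fdata() > 0
brain = nib.load("brain_mask.nii.gz").get_fdata() > 0

n_overlap = int(np.logical_and(face, brain).sum())
if n_overlap:
    print(f"Face mask covers {n_overlap} brain voxels -- revise manually")
else:
    print("Face mask spares all brain voxels")
```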

AI Deep Learning Model Development

CBTN images were stratified into training/validation and testing sets (80–20 split) based on demographics (age, sex, race) and histology (Table). nnUNet25 v1 (https://github.com/MIC-DKFZ/nnUNet/tree/nnunetv1; 3D full resolution; Supplemental Data) was used with 5-fold cross-validation, an initial learning rate of 0.01, stochastic gradient descent with Nesterov momentum (μ = 0.99), and 1000 epochs of 250 minibatches each. Each unprocessed T1W/T1W-CE/T2W/FLAIR sequence was treated as a separate input. The set of 4 images for each subject could be used for either training or validation but not both (ie, images from a single subject could not be split between training and validation within a given fold; see the sketch below). Given that a large percentage of the CBTN scans were from the Children’s Hospital of Philadelphia (CHOP), we additionally split the testing cohort into “internal” (CHOP) and “external” (4 separate institutions) testing data sets.
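As an illustration of the subject-level constraint, the sketch below uses scikit-learn’s GroupKFold to keep all 4 sequences from a subject in the same fold. This is not the authors’ split code, and the image identifiers are made up.

```python
# Illustrative subject-grouped 5-fold split: the 4 sequences of any
# subject stay together in training or validation, never both.
from sklearn.model_selection import GroupKFold

images = [f"sub-{s:03d}_{seq}" for s in range(10)
          for seq in ("T1W", "T1W-CE", "T2W", "FLAIR")]
groups = [name.split("_")[0] for name in images]   # one group per subject

for fold, (tr, va) in enumerate(GroupKFold(n_splits=5).split(images, groups=groups)):
    tr_subj = {groups[i] for i in tr}
    va_subj = {groups[i] for i in va}
    assert tr_subj.isdisjoint(va_subj)             # no subject straddles the split
    print(f"fold {fold}: {len(tr)} training / {len(va)} validation images")
```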

Defacing Accuracy

Model performance was evaluated with (previously unseen) images in the testing cohorts. Traditional performance scores were generated, including the Sørensen-Dice score (spatial overlap between the model-predicted mask and the ground truth mask), sensitivity (percentage of ground truth voxels correctly identified by the model), and the 95% Hausdorff distance (the distance within which 95% of the nearest-voxel distances between the predicted and ground truth masks fall).
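For reference, these 3 metrics can be computed on boolean 3D masks as in the following generic numpy/scipy sketch (not the study’s evaluation code); surfaces for the 95% Hausdorff distance are approximated by subtracting a one-voxel erosion.

```python
# Generic metric sketch for boolean 3D masks (pred, gt: np.ndarray of bool).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def sensitivity(pred, gt):
    # Fraction of ground truth voxels the model also marked.
    return np.logical_and(pred, gt).sum() / gt.sum()

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    surf_p = pred & ~binary_erosion(pred)
    surf_g = gt & ~binary_erosion(gt)
    # Distance from each surface voxel to the nearest voxel of the other
    # surface, pooled symmetrically; report the 95th percentile.
    d_p = distance_transform_edt(~surf_g, sampling=spacing)[surf_p]
    d_g = distance_transform_edt(~surf_p, sampling=spacing)[surf_g]
    return np.percentile(np.concatenate([d_p, d_g]), 95)
```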

As an additional assessment of defacing accuracy, 2 raters (authors Neda K. and Nastaran K.) evaluated model performance in the testing cohorts. For each image, they rated coverage of the eyes and ears (separately for left and right), mouth, and nose with either: 1 (fully covered), 0.75 (approximately 75% masked), 0.5 (50% masked), 0.25 (25% masked), or 0 (not masked at all); and whether any brain tissue was removed (yes/no). After initial independent review, images with disagreement were reviewed until a consensus was reached.

Impact of Defacing on Downstream Analytics

Given the overarching aim to facilitate data sharing of brain MRIs for research purposes, it is essential that any modification of the images by defacing minimally impacts downstream analysis. Several methods were used to assess this by using standard image processing steps, in the brain tumor (CBTN) and nonbrain tumor (SLIP) groups separately.

Preprocessing and Application of Pretrained AI Models.

For each subject in the CBTN testing cohorts, T1W, T2W, and FLAIR sequence images were coregistered with their corresponding T1W-CE sequence and resampled to an isotropic resolution of 1 mm3 based on the anatomic SRI24 atlas26 by using the Greedy algorithm (https://github.com/pyushkevich/greedy)27 in the Cancer Imaging Phenomics Toolkit open-source software v.1.8.1 (CaPTk, https://www.cbica.upenn.edu/captk).28 Accuracy of coregistration was confirmed by visual assessment of the 4 images.
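The study performed coregistration and resampling with the Greedy algorithm inside CaPTk; purely to illustrate the 1 mm isotropic resampling step, a stand-in sketch with SimpleITK follows (file names hypothetical, identity transform in place of the actual registration).

```python
# Stand-in sketch: resample an image to 1-mm isotropic spacing with
# SimpleITK. This is NOT the Greedy/CaPTk pipeline used in the study;
# it only illustrates the resampling step.
import SimpleITK as sitk

img = sitk.ReadImage("T2W.nii.gz")
new_spacing = (1.0, 1.0, 1.0)
new_size = [int(round(sz * sp / ns))
            for sz, sp, ns in zip(img.GetSize(), img.GetSpacing(), new_spacing)]

resampled = sitk.Resample(
    img, new_size, sitk.Transform(),        # identity transform as a placeholder
    sitk.sitkLinear, img.GetOrigin(),
    new_spacing, img.GetDirection(), 0.0,   # background value for padding
    img.GetPixelID())
sitk.WriteImage(resampled, "T2W_1mm.nii.gz")
```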

Preprocessed data for each subject were then input into existing pretrained AI models for automatic brain tissue extraction and tumor subregion segmentation (https://github.com/d3b-center/peds-brain-seg-pipeline-public).29,30 This was performed once by using the original images (nondefaced), and once by using the defaced images. Resulting brain and tumor segmentation masks were compared between these conditions.
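In these comparisons, the “defaced” condition amounts to zeroing all voxels inside the predicted face mask while leaving the rest of the image untouched. A minimal sketch, assuming a mask in the same voxel space as the image and hypothetical file names:

```python
# Minimal sketch: "deface" an image by zeroing voxels inside the
# model-predicted face mask. Assumes mask and image share a voxel grid.
import nibabel as nib
import numpy as np

img = nib.load("sub-001_T1W.nii.gz")
mask = nib.load("sub-001_T1W_face_mask.nii.gz").get_fdata() > 0

data = img.get_fdata()
data[mask] = 0                              # blank out face/ear voxels
nib.save(nib.Nifti1Image(data.astype(np.float32), img.affine),
         "sub-001_T1W_defaced.nii.gz")
```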

Cortical and Subcortical Volumetric Measures.

For 31 subjects in the SLIP testing cohort, their T1W scan was input to FreeSurfer’s reconstruction pipeline (recon-all; https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all)31 to generate cortical and subcortical structure parcellations (5 subjects were excluded due to insufficient T1W image quality). This was performed once with original images and once with defaced images. Resulting volumetric measurements based on the parcellations were compared between these conditions.
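The original-versus-defaced agreement tests reported below (Spearman correlations and paired t tests per measure) can be run with scipy; the sketch here uses simulated volumes as stand-ins for per-subject FreeSurfer outputs, so the numbers are illustrative only.

```python
# Sketch of the agreement tests between original and defaced outputs:
# Spearman correlation plus a paired t test for one measure.
# The volumes below are simulated placeholders, not study data.
import numpy as np
from scipy.stats import spearmanr, ttest_rel

rng = np.random.default_rng(42)
orig = rng.normal(4000, 300, size=31)        # e.g., a regional volume (mm^3)
defaced = orig + rng.normal(0, 40, size=31)  # small defacing-induced jitter

rho, p_rho = spearmanr(orig, defaced)
t_stat, p_t = ttest_rel(orig, defaced)
print(f"Spearman rS = {rho:.2f} (P = {p_rho:.1e}); "
      f"paired t = {t_stat:.2f} (P = {p_t:.2f})")
```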

We additionally used an existing AI-powered pipeline to estimate the thickness (temporalis muscle thickness [TMT]) and cross-sectional area (CSA) of the temporalis muscle (https://doi.org/10.5281/zenodo.8428986)24 for 28 SLIP subjects (5 subjects excluded for insufficient quality T1W images, 3 subjects excluded for being younger than 3 years of age as required by the tool).

Please see Supplemental Data for a description of all statistical comparisons and a CLAIM checklist to indicate alignment with the proposed methodologic guidelines recommended for AI in medical imaging.32–34

RESULTS

Defacing Accuracy

Across images, Dice scores indicated reasonable spatial overlap between manual ground truth and model-predicted face masks in the internal (mean = 0.78, median = 0.8, standard error of the mean [SEM] = 0.008), external (mean = 0.75, median = 0.78, SEM = 0.02), and clinical control (mean = 0.75, median = 0.77, SEM = 0.01) groups (Fig 2). Repeated-measures ANOVAs confirmed there was no effect of image type (T1W/T1W-CE/T2W/FLAIR) on Dice scores in the internal (F(3,108) = 0.38, P = .77) and external (F(3,72) = 1.8, P = .16) cohorts; however, there was a significant effect in the clinical control group (F(3,105) = 6.14, P = .007), with better model performance for T2W and FLAIR compared with T1W and T1W-CE (Supplemental Data). Pearson correlations showed no effect of age on Dice scores averaged across image types (internal: r(35) = 0.19, P = .25; external: r(23) = 0.29, P = .17; control: r(34) = 0.28, P = .095; Supplemental Data). One-way ANOVAs indicated no effect of sex (internal: F(1,35) = 2.0, P = .17; external: F(1,23) = 0.28, P = .6; control: F(1,34) = 3.17, P = .08) or race (internal: F(3,33) = 0.18, P = .911; external: F(2,22) = 0.61, P = .551; control: F(2,32) = 1.07, P = .356) on Dice scores, and no effect of histopathologic diagnosis (internal: F(4,32) = 0.442, P = .777; external: F(1,23) = 0.377, P = .545) or general tumor location (internal: F(4,32) = 0.837, P = .512; external: F(3,21) = 0.1, P = .959) in the CBTN testing cohorts.
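As a hedged illustration of the repeated-measures design used here (1 within-subject factor, image type), the following statsmodels sketch runs the same style of test on simulated Dice scores; column names and values are illustrative, not study data.

```python
# Illustrative repeated-measures ANOVA: Dice score by image type
# (within-subject factor). The Dice values are simulated.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_subj, seqs = 37, ["T1W", "T1W-CE", "T2W", "FLAIR"]
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), len(seqs)),
    "image_type": seqs * n_subj,
    "dice": rng.normal(0.78, 0.05, n_subj * len(seqs)).clip(0, 1),
})
print(AnovaRM(df, depvar="dice", subject="subject", within=["image_type"]).fit())
```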

FIG 2. Model performance results. Plots show aggregate metrics across image types for each testing cohort (see Supplemental Data for results for each image type separately); error bars represent SEM. A, Standard metrics for segmentation evaluation including Dice similarity, sensitivity, and 95% Hausdorff distance. B, Average performance ratings based on visual inspection by 2 raters (1 = fully covered, 0.75 = approximately 75% masked, 0.5 = 50% masked, 0.25 = 25% masked, 0 = not masked at all).

On further review, it was determined that the spatial metrics were not an ideal measure of defacing performance due to variability in extension of the face mask into the air in front of the face in the ground truth segmentations (Fig 3, Supplemental Data). To more accurately assess model performance, 2 raters (Neda K., Nastaran K.) reviewed each defaced image in the internal, external, and clinical control testing groups. After applying the model-predicted face masks to the corresponding images, the raters were instructed to score the model’s accuracy in masking (coverage of) the left eye, right eye, nose, mouth, left ear, and right ear separately (1 = fully masked, 0.75/0.5/0.25 = % partially masked, 0 = not masked) for each image separately.

FIG 3. Representative example images of model-predicted versus manual ground truth segmentation masks. Subjects shown with high (left box; T1W-CE sequence) and low (right box; FLAIR sequence) Dice similarity scores between the model-predicted (upper row) and manual ground truth (lower row) face masks. This illustrates how Dice score, although a common metric for such segmentation tasks, was not an accurate measure of model performance in the present study, as ground truth masks were variable in their extension into the space in front of the face (particularly due to “MiDeface” lettering imposed by the MiDeface FreeSurfer tool that was used to generate initial face masks).

Across facial features, the average rated accuracy of model defacing was high for each testing set (means: internal = 0.93, external = 0.86, control = 0.89). Composite scores combining the eyes, mouth, and nose ratings indicated high masking performance for these features (Fig 2, Supplemental Data; internal = 0.97, external = 0.98, control = 0.98), while performance for masking the ears was lower (internal = 0.85, external = 0.62, control = 0.72). For every image, both raters reported no brain voxels were impacted by defacing in the internal, external, or clinical control groups. Repeated-measures ANOVAs showed a significant effect of image type on defacing performance in the clinical control group (F(3,75) = 10.8, P < .0001), with higher average ratings for T1W (M = 0.91) and T1W-CE (M = 0.91) compared with T2W (M = 0.89) and FLAIR (M = 0.86); but no effect of image type in the internal (F(3,108) = 1.17, P = .33) or external (F(3,72) = 0.32, P = .81) groups. Average rating across subjects and image types for each feature is displayed in the Supplemental Data.

Assessing Impact of Defacing on Downstream Analytics

Preprocessing and Application of Pretrained AI Models.

Defaced and original (nondefaced) images underwent preprocessing and were input to pretrained AI tools to assess any impact of defacing on standard downstream analysis by using all 4 image sequences (T1W/T1W-CE/T2W/FLAIR). Visual inspection showed equivalent coregistration performance between defaced and original images. For the pediatric brain tumor test data sets, the volumes of AI-generated brain masks were equivalent between defaced and nondefaced images (internal: rS(35) > 0.99, P < .0001; external: rS(23) > 0.99, P < .0001; Fig 4, upper and middle). AI-generated tumor segmentations were also unaffected by defacing, indicated by equivalent volumes of contrast-enhancing tumor, nonenhancing tumor, cystic, and edema subregions (internal: all subregions rS(35) > 0.99, P < .0001; external: all subregions rS(23) > 0.99, P < .0001; Fig 4, Supplemental Data).

FIG 4. Testing the impact of defacing on AI-generated volumetrics. Each point represents 1 subject; the red line indicates a linear trend. Upper/middle: Comparison of tumor subregion volumes between defaced (x-axis) and original (y-axis) images in pediatric brain tumor subjects. There was very high agreement between brain and tumor segmentation volumes. Lower: Comparison of estimated TMT, cross-sectional area (CSA), and TMT centile scores between defaced (x-axis) and original (y-axis) T1W images from the clinical control group (point colors indicate age). Correlations indicated very high agreement between TMT, CSA, and resulting TMT centile scores.

Cortical and Subcortical Volumetric Measures.

For 31 subjects in the clinical control (SLIP) cohort, we further investigated any impact of defacing on derived brain measures from T1W images by using a standard anatomic reconstruction pipeline (FreeSurfer recon-all). There was very high agreement between estimated global and regional measures, with all correlations between original and defaced images significant and positive (mean rS(29) = 0.93, all P < .0001; Supplemental Data). Correlations were above 0.9 for 48 out of 58 measures. Regions with the lowest agreement were the left and right cerebellum white matter (left: rS(29) = 0.71, P < .0001; right: rS(29) = 0.69, P < .0001). Nine global measurements (cortex, cerebral white matter, subcortical gray matter, total gray matter, total brain [including cerebellum], total brain excluding ventricles [surface], total brain excluding ventricles [volume], CSF, and total intracranial volumes) were equivalent between original and defaced images (rS(29) > 0.86). Paired t tests indicated no significant differences between original and defaced brain measures (Supplemental Data), with the exception of the right vessel (original M = 11.3, SEM = 1.38; defaced M = 14.7, SEM = 2.19; t(30) = −2.32, P = .03) and the right hippocampus (original M = 3940.8, SEM = 101; defaced M = 3972.8, SEM = 101; t(30) = −2.36, P = .03), which were estimated to be slightly larger on average in the defaced compared with the original images. Overall, these results indicate defacing had minimal impact on cortical and subcortical volumetric assessments by using a standard processing pipeline, which aligns with previous reports of minimal effects of defacing tools on global FreeSurfer measurements.17

To examine the impact of defacing on regional measurements in close proximity to the face, we extracted TMT (mm) and CSA measurements (SLIP cohort ages >3 years; n = 28) by using an existing AI-powered pipeline24 with T1W images. Notably, TMT scores have been implicated as a predictive marker for sarcopenia across patient populations.35–38 Spearman correlations showed high agreement of estimated TMT (rS(26) = 0.96, P < .0001) and CSA (left hemisphere [LH]: rS(26) = 0.96, P < .0001; right hemisphere [RH]: rS(26) = 0.97, P < .0001; Fig 4, lower) between defaced and original images. Paired t tests indicated no difference in TMT measurements between original and defaced images (t(27) = −1.8, P = .08), but a significant difference in CSA (LH: t(27) = −3.74, P < .0001; RH: t(27) = −4.79, P = .0009), with lower cross-sectional area estimates for the defaced (LH: M = 306.2, SEM = 30; RH: M = 314.7, SEM = 33) compared with the original (LH: M = 339.9, SEM = 35; RH: M = 350.5, SEM = 37) images. Resulting centile scores based on TMT, age, and sex (compared with TMT distributions estimated from large-scale data sets24) were not significantly affected by defacing (rS(26) = 0.9, P < .0001; t(27) = −0.97, P = .34).

DISCUSSION

Data sharing of MRIs is crucial to transparent and reproducible research, particularly in the era of predictive AI that requires ample volumes of representative data. Widely available pediatric imaging data sets are needed to accelerate discoveries in neuroscience, particularly in rare disease contexts. To this end, we aim to enable MRI data sharing through the development of an open-source de-identification tool for the automatic removal of identifiable facial features. A deep learning model for face masking was trained by using a large, multi-institutional data set of clinically acquired, multiparametric MRIs (CBTN).

The trained model had strong performance removing the face (eyes, nose, mouth) in an unseen data set, with adequate, though lower, performance on ear removal. This is potentially because ears were absent from some images in the training data set (limited field of view). Notably, although the model was trained on data from patients with brain tumor, it could generalize to a separate data set of clinically matched controls, indicating its potential use across anatomically normal and disease-impacted cohorts. To enable wider usage by the community, the trained model is publicly provided as an open-source software package, and we encourage further development to extend the model to additional disease and healthy populations (see potential clinical limitations in the Supplemental Data).

Critically, image alteration by defacing should not impact usage for intended research purposes. To ensure this, we compared the outputs of standard processing pipelines between defaced and original (nondefaced) images. Statistical trends for AI-estimated whole brain and tumor volumes (brain tumor group), in addition to derived brain region volumes, global brain metrics, and AI-generated temporalis muscle measurements (control group), were unaffected by defacing. Most estimated measures were equivalent between defaced and original images, and any resulting measurement differences did not impact overall patterns at a group level. Thus, there was minimal impact of defacing on the utility of the structural images for downstream analysis with standard research pipelines.

Many existing defacing tools are limited to T1W sequences,13,22,39 and we sought to expand support to additional structural image types (T2W, FLAIR, T1W-CE), given their prevalence in clinical and research practices. That said, our tool is limited to 4 sequences, and further development could expand to additional types such as functional MRI and other advanced imaging (eg, diffusion-weighted imaging). Although consensus review was used to assess defacing performance, additional quantitative metrics such as face recognition rate may provide a more objective measure of de-identification performance. Another limitation of this study is that, while the training data set included images across 6 institutions, a large portion of the data set came from a single institution (CHOP). Future work should focus on expanding to larger studies to bolster model generalizability, and would benefit from direct comparison between deep learning and existing computer-vision methods.

CONCLUSIONS

We developed an AI-powered pediatric defacing tool with the goal of facilitating wider de-identification of structural MRIs for data sharing purposes. The tool is publicly available (https://github.com/d3b-center/pediatric-auto-defacer-public) and can be used on multiple image types. Future work can extend the model to additional populations and MR sequences to provide a universal method to facilitate data sharing and ultimately drive discoveries in neuroscience research.

Footnotes

  • This project was supported in part from the National Institutes of Health (NIH) National Heart, Lung, and Blood Institute (NHLBI; grant number U2CHL156291/3U2CHL156291-02S1 to A.C.R.).

  • Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.

References

  1. Chen RJ, Wang JJ, Williamson DFK, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023;7:719–42. doi:10.1038/s41551-023-01056-8
  2. Wilkinson MD, Dumontier M, Aalbersberg IJ, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 2016;3:160018. doi:10.1038/sdata.2016.18
  3. Mueller SG, Weiner MW, Thal LJ, et al. Ways toward an early diagnosis in Alzheimer’s disease: the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Alzheimers Dement 2005;1:55–66. doi:10.1016/j.jalz.2005.06.003
  4. Prior FW, Clark K, Commean P, et al. TCIA: an information resource to enable open science. In: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013:1282–85. doi:10.1109/EMBC.2013.6609742
  5. Aberle DR, Adams AM, Berg CD, et al; National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med 2011;365:395–409. doi:10.1056/NEJMoa1102873
  6. Buda M, Saha A, Walsh R, et al. A data set and deep learning algorithm for the detection of masses and architectural distortions in digital breast tomosynthesis images. JAMA Netw Open 2021;4:e2119100. doi:10.1001/jamanetworkopen.2021.19100
  7. Schwarz CG, Kremers WK, Therneau TM, et al. Identification of anonymous MRI research participants with face-recognition software. N Engl J Med 2019;381:1684–86. doi:10.1056/NEJMc1908881
  8. Mazura JC, Juluru K, Chen JJ, et al. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security. J Digit Imaging 2012;25:347–51. doi:10.1007/s10278-011-9429-3
  9. Abramian D, Eklund A. Refacing: reconstructing anonymized facial features using GANs. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE; 2019:1104–08. doi:10.1109/ISBI.2019.8759515
  10. Bischoff-Grethe A, Ozyurt IB, Busa E, et al. A technique for the deidentification of structural brain MR images. Hum Brain Mapp 2007;28:892–903. doi:10.1002/hbm.20312
  11. Gulban OF, Nielson D, Poldrack R, et al. poldracklab/pydeface: v2.0.0. Zenodo 2019. doi:10.5281/zenodo.3524401
  12. Alfaro-Almagro F, Jenkinson M, Bangerter NK, et al. Image processing and quality control for the first 10,000 brain imaging datasets from UK Biobank. NeuroImage 2018;166:400–24. doi:10.1016/j.neuroimage.2017.10.034
  13. Khazane A, Hoachuck J, Gorgolewski KJ, et al. DeepDefacer: automatic removal of facial features via U-Net image segmentation. arXiv 2022. http://arxiv.org/abs/2205.15536. Accessed January 26, 2024.
  14. Milchenko M, Marcus D. Obscuring surface anatomy in volumetric imaging data. Neuroinformatics 2013;11:65–75. doi:10.1007/s12021-012-9160-3
  15. de Sitter A, Visser M, Brouwer I, et al; MAGNIMS Study Group and Alzheimer’s Disease Neuroimaging Initiative. Facing privacy in neuroimaging: removing facial features degrades performance of image analysis methods. Eur Radiol 2020;30:1062–74. doi:10.1007/s00330-019-06459-3
  16. Rubbert C, Wolf L, Turowski B, et al; Alzheimer’s Disease Neuroimaging Initiative. Impact of defacing on automated brain atrophy estimation. Insights Imaging 2022;13:54. doi:10.1186/s13244-022-01195-7
  17. Theyers AE, Zamyadi M, O’Reilly M, et al. Multisite comparison of MRI defacing software across multiple cohorts. Front Psychiatry 2021;12:617997. doi:10.3389/fpsyt.2021.617997
  18. Buimer EEL, Schnack HG, Caspi Y, et al; Alzheimer’s Disease Neuroimaging Initiative. De-identification procedures for magnetic resonance images and the impact on structural brain measures at different ages. Hum Brain Mapp 2021;42:3643–55. doi:10.1002/hbm.25459
  19. Familiar AM, Kazerooni AF, Anderson H, et al. A multi-institutional pediatric data set of clinical radiology MRIs by the Children’s Brain Tumor Network. arXiv 2023. https://arxiv.org/abs/2310.01413. Accessed October 15, 2024.
  20. Schabdach JM, Schmitt JE, Sotardi S, et al. Brain growth charts of “clinical controls” for quantitative analysis of clinically acquired brain MRI. Radiology 2023;309:e230096
  21. Lilly JV, Rokita JL, Mason JL, et al. The Children’s Brain Tumor Network (CBTN): accelerating research in pediatric central nervous system tumors through collaboration and open science. Neoplasia 2023;35:100846. doi:10.1016/j.neo.2022.100846
  22. MiDeFace. FreeSurfer Wiki. https://surfer.nmr.mgh.harvard.edu/fswiki/MiDeFace#Notes. Accessed March 17, 2023.
  23. Yushkevich PA, Pashchinskiy A, Oguz I, et al. User-guided segmentation of multi-modality medical imaging datasets with ITK-SNAP. Neuroinformatics 2019;17:83–102. doi:10.1007/s12021-018-9385-x
  24. Zapaishchykova A, Liu KX, Saraf A, et al. Automated temporalis muscle quantification and growth charts for children through adulthood. Nat Commun 2023;14:6863. doi:10.1038/s41467-023-42501-1
  25. Isensee F, Jaeger PF, Kohl SAA, et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021;18:203–11. doi:10.1038/s41592-020-01008-z
  26. Rohlfing T, Zahr NM, Sullivan EV, et al. The SRI24 multichannel atlas of normal adult human brain structure. Hum Brain Mapp 2010;31:798–819. doi:10.1002/hbm.20906
  27. Yushkevich PA, Pluta J, Wang H, et al. Fast automatic segmentation of hippocampal subfields and medial temporal lobe subregions in 3 Tesla and 7 Tesla T2-weighted MRI. Alzheimers Dement 2016;12:P126–27
  28. Pati S, Singh A, Rathore S, et al. The Cancer Imaging Phenomics Toolkit (CaPTk): technical overview. In: Crimi A, Bakas S, eds. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Lecture Notes in Computer Science. Springer International Publishing; 2020:380–94. doi:10.1007/978-3-030-46643-5_38
  29. Fathi Kazerooni A, Arif S, Madhogarhia R, et al. Automated tumor segmentation and brain tissue extraction from multiparametric MRI of pediatric brain tumors: a multi-institutional study. Neurooncol Adv 2023;5:vdad027. doi:10.1093/noajnl/vdad027
  30. Vossough A, Khalili N, Familiar AM, et al. Training and comparison of nnU-Net and DeepMedic methods for autosegmentation of pediatric brain tumors. AJNR Am J Neuroradiol 2024;45:1081–89. doi:10.3174/ajnr.A8293
  31. Fischl B. FreeSurfer. Neuroimage 2012;62:774–81. doi:10.1016/j.neuroimage.2012.01.021
  32. Tejani AS, Klontzas ME, Gatti AA, et al; CLAIM 2024 Update Panel. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 update. Radiol Artif Intell 2024;6:e240300. doi:10.1148/ryai.240300
  33. Mongan J, Moy L, Kahn CE Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell 2020;2:e200029. doi:10.1148/ryai.2020200029
  34. Pham N, Hill V, Rauschecker A, et al. Critical appraisal of artificial intelligence–enabled imaging tools using the levels of evidence system. AJNR Am J Neuroradiol 2023;44:E21–28. doi:10.3174/ajnr.A7850
  35. Lee B, Bae YJ, Jeong WJ, et al. Temporalis muscle thickness as an indicator of sarcopenia predicts progression-free survival in head and neck squamous cell carcinoma. Sci Rep 2021;11:19717. doi:10.1038/s41598-021-99201-3
  36. Cho J, Park M, Moon WJ, et al. Sarcopenia in patients with dementia: correlation of temporalis muscle thickness with appendicular muscle mass. Neurol Sci 2022;43:3089–95. doi:10.1007/s10072-021-05728-8
  37. Muglia R, Simonelli M, Pessina F, et al. Prognostic relevance of temporal muscle thickness as a marker of sarcopenia in patients with glioblastoma at diagnosis. Eur Radiol 2021;31:4079–86. doi:10.1007/s00330-020-07471-8
  38. Nozoe M, Kubo H, Kanai M, et al. Reliability and validity of measuring temporal muscle thickness as the evaluation of sarcopenia risk and the relationship with functional outcome in older patients with acute stroke. Clin Neurol Neurosurg 2021;201:106444. doi:10.1016/j.clineuro.2020.106444
  39. Schwarz CG, Kremers WK, Wiste HJ, et al; Alzheimer’s Disease Neuroimaging Initiative. Changing the face of neuroimaging research: comparing a new MRI de-facing technique with popular alternatives. NeuroImage 2021;231:117845. doi:10.1016/j.neuroimage.2021.117845
  • Received July 24, 2024.
  • Accepted after revision November 7, 2024.
  • © 2025 by American Journal of Neuroradiology