Research Article: Artificial Intelligence

Deep Learning–Based ASPECTS Algorithm Enhances Reader Performance and Reduces Interpretation Time

Angela Ayobi, Adam Davis, Peter D. Chang, Daniel S. Chow, Kambiz Nael, Maxime Tassy, Sarah Quenet, Sylvain Fogola, Peter Shabe, David Fussell, Christophe Avare and Yasmina Chaibi
American Journal of Neuroradiology March 2025, 46 (3) 544-551; DOI: https://doi.org/10.3174/ajnr.A8491
Author affiliations:

  • aAvicenna.AI (A.A., M.T., S.Q., S.F., C.A., Y.C.), La Ciotat, France
  • bAmalgamated Vision (A.D.), Brentwood, Tennessee
  • cDepartment of Radiological Sciences (P.D.C., D.S.C., D.F.), University of California Irvine, Orange, California
  • dCenter for Artificial Intelligence in Diagnostic Medicine (P.D.C., D.S.C.), University of California Irvine, Irvine, California
  • eDavid Geffen School of Medicine at UCLA (K.N.), Los Angeles, California
  • fAdvance Research Associates (P.S.), Santa Clara, California

Graphical Abstract

[Graphical abstract figure]

Abstract

BACKGROUND AND PURPOSE: ASPECTS is a long-standing and well-documented selection criterion for acute ischemic stroke treatment; however, ASPECTS interpretation is a challenging and time-consuming task for physicians, with notable interobserver variability. We conducted a multireader, multicase study in which readers assessed ASPECTS without and with the support of a deep learning (DL)-based algorithm to analyze the impact of the software on clinicians’ performance and interpretation time.

MATERIALS AND METHODS: A total of 200 NCCT scans from 5 clinical sites (27 scanner models, 4 different vendors) were retrospectively collected. The reference standard was established through the consensus of 3 expert neuroradiologists who had access to baseline CTA and CTP data. Subsequently, 8 additional clinicians (4 typical ASPECTS readers and 4 senior neuroradiologists) analyzed the NCCT scans without and with the assistance of CINA-ASPECTS (Avicenna.AI), a DL-based, FDA-cleared, and CE-marked algorithm designed to compute ASPECTS automatically. Differences were evaluated in both performance and interpretation time between the assisted and unassisted assessments.

RESULTS: With software aid, readers demonstrated increased region-based accuracy from 72.4% to 76.5% (P < .05) and increased receiver operating characteristic area under the curve (ROC AUC) from 0.749 to 0.788 (P < .05). Notably, all readers exhibited an improved ROC AUC when utilizing the software. Moreover, the use of the algorithm improved the score-based interobserver reliability and correlation coefficient of ASPECTS evaluation by 0.222 and 0.087 (P < .0001), respectively. Additionally, the readers’ mean time spent analyzing a case was significantly reduced by 6% (P < .05) when aided by the algorithm.

CONCLUSIONS: With the assistance of the algorithm, readers’ analyses were not only more accurate but also faster. Additionally, the overall ASPECTS evaluation exhibited greater consistency, less variability, and higher precision compared with the reference standard. This novel tool has the potential to enhance patient selection for appropriate treatment by enabling physicians to deliver accurate and timely diagnoses of acute ischemic stroke.

ABBREVIATIONS:

AI = artificial intelligence; DL = deep learning; EIC = early ischemic changes; ICC = intraclass correlation coefficient; IS = ischemic stroke; ROC AUC = receiver operating characteristic area under the curve; SD = standard deviation

SUMMARY

PREVIOUS LITERATURE:

Stroke remains the second leading cause of death globally, with IS being the most common type. ASPECTS quantifies early ischemic changes on noncontrast CT scans and guides treatment decisions, especially for endovascular therapy. However, ASPECTS interpretation is challenging, with variability that depends on experience and other factors. Recent machine learning algorithms aim to improve ASPECTS accuracy and speed, but their clinical impact remains understudied. This multireader, multicase study evaluates the effect of a deep learning–based ASPECTS tool on clinicians’ accuracy and interpretation time in near-real-world settings.

KEY FINDINGS:

A total of 200 stroke cases were analyzed. AI-assisted interpretation of ASPECTS improved readers’ accuracy by 4.1%, interrater reliability (intraclass correlation coefficient of 0.689 versus 0.467), and correlation with the reference standard. AI also reduced interpretation time by 6% (6.7 seconds) and helped accurately guide treatment decisions in 13% of cases.

KNOWLEDGE ADVANCEMENT:

Unlike previous studies, the current results are consistent across various readers, ASPECTS regions, and data subcategories, highlighting the tool’s potential to standardize stroke assessments and accelerate clinical decision-making. Taken together, these findings suggest that software assistance has the potential to provide better diagnoses and improve patient outcomes.

Despite significant improvements in primary prevention and treatment in recent decades, stroke remains the second leading cause of death worldwide.1 Ischemic stroke (IS) is the most frequent type of stroke, accounting for 62.4% of all stroke cases worldwide in 2019.2 Ischemic infarcts most commonly arise from occlusion of the proximal large arterial vasculature including the MCA and/or ICA, which account for 10%–46% of all acute IS.3 Small vessel occlusions or lacunar strokes account for 20%–25% of IS cases.3,4 Recent projections suggest that the incidence of IS will continue to increase between 2020 and 2030.5

To improve stroke imaging triage and guide treatment, ASPECTS was created as a semiquantitative visual grading system to estimate the extent and distribution of early ischemic changes (EIC) on NCCT.6,7 To calculate ASPECTS, the MCA vascular territory is divided into 10 regions, and 1 point is subtracted for each region in which parenchymal hypoattenuation reflecting EIC is present.7,8 ASPECTS is commonly used to select patients for endovascular treatment: patients with ASPECTS ≥6 are prioritized for thrombectomy because higher scores are associated with better patient outcomes and a reduced risk of hemorrhagic conversion.9,10
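
As a purely illustrative sketch of this scoring rule (not part of the study software), the score can be derived from per-region EIC flags as follows; the region names follow the standard ASPECTS template, and the data layout is an assumption:

    # Minimal sketch of the ASPECTS scoring rule: start at 10 and subtract 1 point
    # for every MCA-territory region that shows early ischemic changes (EIC).
    ASPECTS_REGIONS = ["C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"]

    def compute_aspects(eic_positive):
        """eic_positive maps each region name to True if EIC is present."""
        return 10 - sum(1 for region in ASPECTS_REGIONS if eic_positive.get(region, False))

    # Example matching Fig 1: EIC in IC, L, I, and M2-M6 gives an ASPECT score of 2.
    example = {r: r in {"IC", "L", "I", "M2", "M3", "M4", "M5", "M6"} for r in ASPECTS_REGIONS}
    print(compute_aspects(example))  # -> 2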

ASPECTS is now a well-documented and widely accepted patient selection criterion for mechanical thrombectomy and an accurate predictor of long-term functional outcomes.11 However, the interpretation of ASPECTS remains challenging and time-consuming, even for stroke experts.12 Intra- and interobserver variability varies greatly with experience, level of training, knowledge of stroke symptoms, and time between onset and imaging.13-16 Poor image quality, motion artifacts, or head tilt may also cause errors.17 Previous studies have shown that clinical experts tend to evaluate ASPECTS with high specificity (80.9%–99.0%) but low or moderate sensitivity (10.2%–75.0%), leading to overall mixed accuracy performance.14,18-21

Recently, machine learning algorithms have been developed to assist clinicians in the analysis of ASPECTS and to provide a more accurate and faster diagnosis. Because of their recent commercialization, there is little literature evaluating the impact of automated ASPECTS algorithms on patients’ clinical outcomes.22 Nevertheless, diverse studies evaluating other machine learning algorithms applied to IS, such as large vessel occlusion detection, demonstrated a positive impact on patient outcomes.23-25 Conversely, several diagnostic studies evaluated the stand-alone performance of ASPECTS algorithms, with accuracies ranging from 66.0% to 96.0%.18-21,26-29 However, few multireader studies have assessed the effect of algorithm usage on clinical interpretation accuracy and interobserver agreement.30-33 Furthermore, the effect of machine learning algorithms on interpretation time remains poorly understood. This evaluation is critical because imaging interpretation time and speed of triage are of utmost importance in the context of stroke.8 Given these limitations, we conducted a multireader, multicase study in which readers graded ASPECTS without and with the assistance of a CE-marked and FDA-cleared automated algorithm, with the primary objective of evaluating the effect of algorithm usage on clinicians’ accuracy, interobserver variability, and mean interpretation time per NCCT scan. We aimed to reproduce realistic clinical routine settings and compare clinicians’ ASPECTS assessments with and without the assistance of deep learning (DL)-based software on real-world clinical data.

MATERIALS AND METHODS

Data Collection

We retrospectively collected images from acute IS code patients with suspected MCA and/or ICA occlusion from 5 external clinical sources (4 in the United States, acquired between June 2018 and June 2022, and 1 in France, acquired between January 2020 and December 2022). A waiver of consent was obtained from the Western Institutional Review Board for all cases. Informed consent for participation was not required for this study in accordance with the national legislation and institutional requirements. Inclusion criteria were patients older than 21 years who underwent baseline NCCT, CTA, and CTP for acute stroke diagnosis. The time between baseline NCCT and CTP was required to be less than 1 hour, and the time from stroke onset/last known well to baseline CT (NCCT and CTA) was required to be less than 12 hours. An ICA and/or MCA occlusion was visually confirmed on source CTA images for all included cases by the US board-certified expert neuroradiologists who established the reference standard. ASPECTS is calculated from the MCA regions, but several authors now also use it to quantify the extent of ischemic change in ICA occlusions.34-36 Patients with diffuse parenchymal abnormalities precluding ASPECTS evaluation were excluded from analysis by the experts (eg, intracranial hemorrhage and/or bilateral IS, large craniotomy with brain herniation, severe image artifacts impeding CT interpretation). Images were acquired on 27 different scanner models (11 from GE Healthcare, 3 from Philips Healthcare, 10 from Siemens Healthineers, and 3 from Canon Medical Systems) with a slice thickness ≤2.5 mm.

Reference Standard

Two US board-certified expert neuroradiologists, with more than 7 and 28 years of experience, respectively, visually assessed the NCCT series to determine the infarcted ASPECTS regions and establish the reference standard. Baseline CTA and CTP were additionally provided to assist the experts in the ASPECTS analysis. In cases of discrepancy between the 2 primary expert reviewers, a third US board-certified expert neuroradiologist with 10 years of experience was recruited to establish the reference standard by majority agreement.

First, the experts confirmed the presence of a unilateral occlusion within the MCA and/or ICA on the CTA. Second, the laterality of the infarct was determined (left or right hemisphere). Finally, the presence or absence of EIC in each of the 10 ASPECTS regions was defined within the infarcted hemisphere for ASPECTS characterization. The experts used the baseline NCCT series, in conjunction with the CTP hemodynamic maps, to decide on the presence or absence of EIC in each region. CTP can be more sensitive than NCCT for identifying lesions; however, discrepancies between the 2 acquisitions can arise from evolution and growth of the infarct core between the time of NCCT and CTP. Hence, for each case, the time between the baseline NCCT and CTP acquisitions was provided to the experts to guide their decisions.

DL-Based Tool

The impact of CINA-ASPECTS (Avicenna.AI) on physicians’ interpretations was evaluated in this study. The algorithm is implemented as a series of convolutional neural network DL models. The application is composed of a hybrid 3D/2D UNet network with a regression loss function and a 4-stage 2D UNet network for anatomic localization.31,38 First, 3D reorientation and tilt correction are applied to create a uniform standardized field-of-view. Next, a landmark-based DL nonlinear registration algorithm is used to identify the ASPECTS regions within the brain. Finally, a separate DL model estimates the degree of EIC as a probability map throughout the brain regions. Based on the outputs of the previous models, a final algorithm calculates a composite ASPECT score.
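
The CINA-ASPECTS models themselves are proprietary, so the staged design described above can only be sketched schematically. In the following hypothetical skeleton, every function body is a placeholder and nothing here reflects the actual implementation:

    # Hypothetical skeleton of the staged pipeline described above; all function
    # bodies are placeholders standing in for the proprietary DL models.
    import numpy as np

    REGIONS = ["C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"]

    def reorient_and_tilt_correct(volume):
        # Stage 1: 3D reorientation and tilt correction to a standardized field of view.
        return volume

    def localize_aspects_regions(volume):
        # Stage 2: landmark-based nonlinear registration labeling the 10 regions per hemisphere.
        return {f"{name}_{side}": np.zeros(volume.shape, dtype=bool)
                for side in ("left", "right") for name in REGIONS}

    def estimate_eic_probability(volume):
        # Stage 3: voxelwise EIC probability map from a separate DL model.
        return np.zeros(volume.shape, dtype=float)

    def composite_aspects(volume, infarcted_side, threshold=0.5):
        # Stage 4: aggregate the probability map over each region of the infarcted hemisphere.
        volume = reorient_and_tilt_correct(volume)
        masks = localize_aspects_regions(volume)
        prob = estimate_eic_probability(volume)
        positives = sum(1 for name, mask in masks.items()
                        if name.endswith(infarcted_side) and mask.any()
                        and prob[mask].mean() > threshold)
        return 10 - positives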

Visually, the trained algorithm is designed to produce a heat map that may be overlaid on the NCCT series with the 20 ASPECTS regions outlined in either green (negative for EIC) or red (positive for EIC). The algorithm also displays a table summarizing the average Hounsfield units in each ASPECTS region within the areas of infarct and the final ASPECT score (Fig 1). The algorithm interface allows the user to modify results and requires expert confirmation before archiving in PACS.

FIG 1. Example of the DL-based algorithm outputs. In this case, the user confirmed an occlusion on the right side, and the algorithm detected IC, L, I, M2–M6 regions with EIC, leading to an ASPECT score of 2.

For the training phase, 1575 patient examinations were used to develop the 3D reorientation and tilt correction algorithm. Next, 522 patient examinations were used to develop both the landmark-based DL algorithm that identifies the ASPECTS regions within the brain and the algorithm that estimates the probability of ischemic change. Training data were aggregated from several US clinical centers and spanned all major scanner vendors (Canon Medical Systems, GE Healthcare, Philips Healthcare, Siemens Healthineers) as well as a range of patient ages, slice thicknesses, and kVp settings. The testing phase was performed on 139 patient examinations, yielding a region-based sensitivity of 76.6% (95% CI: 72.4%–81.1%), specificity of 88.7% (95% CI: 87.4%–89.9%), and receiver operating characteristic area under the curve (ROC AUC) of 0.826 (P < .0001).39 Further detailed technical information about the design, training, and testing of this commercially available algorithm is not publicly disclosed.

Multireader, Multicase Study

A retrospective, concurrent, crossover, fully-crossed, multireader, multicase study (level 4 of DL evidence) was conducted to evaluate the impact of the DL-based tool on readers’ assessments with respect to 3 objectives:

  • The readers’ region-based accuracy and ROC AUC against the reference standard,

  • The score-based interobserver variability and linear correlation with the reference standard,

  • The interpretation time per NCCT scan.

Eight additional readers, different from those who established the reference standard, were involved in the multireader, multicase study. Four were typical readers who see stroke patients regularly in their practice (reader 1, a neurointensivist with 12 years of experience; reader 2, a vascular neurologist with 8 years of experience; reader 3, a stroke neurologist with 5 years of experience; and reader 4, a general radiologist with 13 years of experience), and 4 were expert senior neuroradiologists with 6, 8, 9, and 12 years of experience, respectively (readers 5–8).

The readers analyzed each NCCT scan twice: once without CINA-ASPECTS (unaided arm) and once with the aid of CINA-ASPECTS (aided arm). In the first reading session, one-half of the NCCTs were randomly selected for analysis without the software, while the remaining one-half were analyzed with the software. In the second reading session, which occurred after a 4-week washout designed to limit potential recall bias, each reader reviewed the same images with software usage (unaided versus aided) reversed. All images were presented in random order during both sessions for each reader, so that after completing both sessions, every reader had analyzed each case twice, once with and once without software assistance.
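
A minimal sketch of this fully crossed crossover assignment follows; the case identifiers, seed, and function name are illustrative:

    # Sketch of the two-session crossover design: half of the cases are read unaided
    # and half aided in session 1; arms are swapped in session 2 after the washout,
    # and presentation order is re-randomized for each reader and session.
    import random

    def assign_sessions(case_ids, seed=0):
        rng = random.Random(seed)
        cases = list(case_ids)
        rng.shuffle(cases)
        half = len(cases) // 2
        session1 = {c: ("unaided" if i < half else "aided") for i, c in enumerate(cases)}
        session2 = {c: ("aided" if arm == "unaided" else "unaided") for c, arm in session1.items()}
        order1 = rng.sample(cases, len(cases))   # random presentation order, session 1
        order2 = rng.sample(cases, len(cases))   # random presentation order, session 2
        return session1, session2, order1, order2

    arms1, arms2, order1, order2 = assign_sessions(range(200), seed=42)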

For each case, the infarcted side previously defined by the reference standard was communicated to the readers. Thus, during each session, readers were asked to grade the presence or absence of EIC in each of the 10 ASPECTS regions only within the infarcted hemisphere by using the following 6-point ordinal scale: definitely not infarcted, 1; probably not infarcted, 2; possibly not infarcted, 3; possibly infarcted, 4; probably infarcted, 5; and definitely infarcted, 6. In addition, the time from initial scan review to final ASPECTS diagnosis (interpretation time) was recorded for each case across all readers during both sessions.

Statistical Analysis

An initial sample size calculation was carried out with nQuery (v8.7.2.0, Dotmatics) by using a 1-way repeated measures ANOVA. To detect a statistically significant difference between aided and unaided accuracy, at least 200 matched pairs were required, assuming a significance level of α = .05, power of 1 – β = 0.80, a standard deviation (SD) of 25%, and r = 0.50.
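
The calculation itself was performed in nQuery; as a rough, hedged cross-check under a simple paired normal approximation (not the repeated measures ANOVA computation), the same assumptions imply a paired-difference SD of 25% × sqrt(2(1 − r)) = 25% and a minimum detectable difference of about 5 percentage points with 200 pairs:

    # Rough normal-approximation cross-check of the sample size assumptions
    # (two-sided alpha = .05, power = 0.80, SD = 25%, correlation r = 0.50).
    # This is only an illustration, not the nQuery repeated measures ANOVA result.
    from scipy.stats import norm

    alpha, power, sd, r, n = 0.05, 0.80, 25.0, 0.50, 200
    sd_diff = sd * (2 * (1 - r)) ** 0.5            # SD of the paired difference = 25%
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # 1.96 + 0.84
    min_detectable = z * sd_diff / n ** 0.5        # smallest detectable mean difference
    print(round(min_detectable, 1))                # ~5.0 percentage points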

Interrater agreement for the first 2 experts who established the reference standard was calculated by using Cohen κ.40 For the reader study evaluation, first, we calculated the impact of the algorithm on readers’ region-based accuracy (aided versus unaided) based on the percent of ASPECTS regions matching the reference standard. For this analysis, a threshold of >3 was used to binarize reader scores as positive for EIC (possibly infarcted, 4; probably infarcted, 5; and definitely infarcted, 6 were considered as positive assessments). Furthermore, to account for the potential dependence between ASPECTS regions within the same case (10 regions within a brain hemisphere are potentially correlated), bootstrap methodology was used to estimate 95% CI.41 Statistically significant differences between aided and unaided accuracy were evaluated with the McNemar test for difference of paired proportions.
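
A hedged sketch of this region-level analysis is shown below; the array shapes, variable names, and bootstrap settings are assumptions, and the published confidence intervals were produced with the authors' own implementation:

    # Sketch of the region-based accuracy comparison: ratings on the 6-point scale
    # are binarized at >3, accuracy is computed against the reference standard, the
    # paired aided/unaided difference is tested with McNemar, and a case-level
    # bootstrap accounts for correlation among the 10 regions of the same scan.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    def region_accuracy(ratings, reference):
        # ratings and reference have shape (n_cases, 10); ratings are on the 1-6 scale.
        return ((ratings > 3) == reference.astype(bool)).mean()

    def paired_mcnemar_pvalue(aided, unaided, reference):
        ok_aided = ((aided > 3) == reference.astype(bool)).ravel()
        ok_unaided = ((unaided > 3) == reference.astype(bool)).ravel()
        table = [[np.sum(ok_aided & ok_unaided), np.sum(ok_aided & ~ok_unaided)],
                 [np.sum(~ok_aided & ok_unaided), np.sum(~ok_aided & ~ok_unaided)]]
        return mcnemar(table, exact=False, correction=True).pvalue

    def bootstrap_accuracy_ci(ratings, reference, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        n_cases = ratings.shape[0]
        stats = [region_accuracy(ratings[idx], reference[idx])
                 for idx in (rng.integers(0, n_cases, n_cases) for _ in range(n_boot))]
        return np.percentile(stats, [2.5, 97.5])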

Second, a region-based ROC AUC analysis was performed by using the readers’ 6-point ordinal rating scale following the Obuchowski-Rockette ANOVA method for factorial study design combined with bootstrapping methodology for covariance estimation.42,43 The analysis was conducted by using the MRMCaov R package.44
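
The full multireader analysis was run with the MRMCaov R package; as a simplified illustration only, per-reader empirical AUCs from the 6-point ratings can be computed as below (this does not reproduce the reader and case variance components of the Obuchowski-Rockette model):

    # Simplified illustration: empirical region-level ROC AUC per reader and arm from
    # the 6-point ordinal ratings. The published analysis used the Obuchowski-Rockette
    # ANOVA model (MRMCaov in R), which this sketch does not reproduce.
    from sklearn.metrics import roc_auc_score

    def per_reader_auc(ratings_by_reader, reference):
        # ratings_by_reader: dict reader -> (n_cases, 10) array of 1-6 ratings;
        # reference: (n_cases, 10) array of 0/1 EIC labels from the expert consensus.
        return {reader: roc_auc_score(reference.ravel(), ratings.ravel())
                for reader, ratings in ratings_by_reader.items()}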

Next, for the score-based analysis, the intraclass correlation coefficient (ICC) between readers was computed for both arms. Absolute agreement was assessed by using a 2-way mixed-effects model to test interrater reliability.45 Furthermore, a regression analysis comparing the readers’ ASPECT scores in both arms with the reference standard was performed, and the Pearson correlation coefficient was calculated. In addition, a dichotomized analysis using the endovascular selection cutoff of ASPECTS ≥6 was performed: the ASPECTS values attributed by the readers and by the reference standard were dichotomized at scores ≥6, and Cohen κ was calculated for both arms. We also evaluated the percentage of cases in which the cutoff classification improved with software assistance. A paired test for comparison of correlation coefficients was used to calculate statistical significance.
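
A hedged sketch of these score-based analyses is given below; the long-format column names are assumptions, and for brevity the correlation and κ are computed on reader-averaged scores rather than per reader:

    # Sketch of the score-based analyses: interreader ICC (2-way model, absolute
    # agreement), Pearson correlation with the reference standard, and Cohen kappa on
    # scores dichotomized at the endovascular cutoff of ASPECTS >= 6.
    import pingouin as pg
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    def score_based_metrics(scores, reference):
        # scores: long-format DataFrame with columns case, reader, aspects;
        # reference: Series of consensus ASPECTS indexed by case.
        icc_table = pg.intraclass_corr(data=scores, targets="case",
                                       raters="reader", ratings="aspects")
        mean_scores = scores.groupby("case")["aspects"].mean()
        ref = reference.loc[mean_scores.index]
        r, _ = pearsonr(mean_scores, ref)
        kappa = cohen_kappa_score(mean_scores >= 6, ref >= 6)
        return icc_table, r, kappa   # report the ICC2 (absolute agreement) row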

Finally, differences in average interpretation time per NCCT scan (aided versus unaided) were evaluated by using a mixed-effects repeated measures model. These analyses were conducted with MedCalc (v20.015, MedCalc Software).
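
The time comparison was run in MedCalc; a roughly equivalent mixed-effects specification could be written as below, with the column names (time, arm, reader, case) assumed:

    # Simplified mixed-effects sketch for the interpretation-time comparison: arm
    # (aided vs unaided) as a fixed effect, a random intercept per reader, and case
    # as a variance component. Column names are assumed.
    import statsmodels.formula.api as smf

    def fit_time_model(df):
        # df columns: time (seconds), arm ("aided"/"unaided"), reader, case.
        model = smf.mixedlm("time ~ C(arm)", data=df, groups=df["reader"],
                            re_formula="1", vc_formula={"case": "0 + C(case)"})
        return model.fit()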

RESULTS

A total of 226 cases met the inclusion criteria; 149 cases were provided by the 4 US clinical sources and 77 by the French clinical source. After initial review by the US board-certified expert neuroradiologists who established the reference standard, 26 cases were excluded because of intracranial hemorrhage (n = 1), absence of ICA or MCA occlusion (n = 24), or image artifact degradation (n = 1). Finally, 200 cases (133 from the United States and 67 from France) were included for analysis, spanning 2000 ASPECTS regions. The final cohort had a mean age of 70.2 ± 14.6 [SD] years, included 44.5% women, and had a mean time from stroke onset to NCCT of 3.9 ± 2.9 [SD] hours. There were no missing data.

The first US board-certified neuroradiologist assessed 791/2000 regions as positive, whereas the second assessed 725/2000 regions as positive. The 2 experts disagreed on 328/2000 (16.4%) regions, yielding a moderate interrater agreement of 0.65 [95% CI: 0.62–0.69] according to Cohen κ.40 After consensus, the median ASPECTS was 6, and 39.6% of regions were defined as having EIC.
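
As a cross-check, the reported κ of 0.65 can be reconstructed from the marginal counts given above (791 and 725 positive regions and 328 disagreements out of 2000):

    # Cross-check of the reported expert agreement (Cohen kappa ~0.65) from the
    # reported counts: 791 and 725 positive regions of 2000, with 328 disagreements.
    total, pos_a, pos_b, disagree = 2000, 791, 725, 328
    n12 = (disagree + (pos_a - pos_b)) // 2   # expert A positive, expert B negative
    n21 = disagree - n12                      # expert A negative, expert B positive
    n11 = pos_a - n12                         # both positive
    n22 = total - n11 - n12 - n21             # both negative
    p_observed = (n11 + n22) / total
    p_expected = ((pos_a / total) * (pos_b / total)
                  + ((total - pos_a) / total) * ((total - pos_b) / total))
    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(round(kappa, 2))  # -> 0.65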

Region-Based Analysis

The overall readers’ region-based accuracy in the aided arm was higher than in the unaided arm: 76.5% (95% CI: 75.8%–77.1%) versus 72.4% (95% CI: 71.6%–73.0%). The difference between the aided and unaided arm was 4.1% (95% CI: 3.3%–4.9%) and statistically significant (P < .05). Stand-alone software accuracy was 76.1% (95% CI: 74.3%–78.0%). Additional subgroup analyses based on ASPECTS grouped regions, readers’ expertise, and scanner manufacturers are shown in Table 1. One example of a typical reader’s aided and unaided assessment is shown in Fig 2.

FIG 2. Example of a typical reader’s aided and unaided assessment. A and B, Raw NCCT images. C and D, AI-based outputs. Expert consensus defined an ASPECTS of 4 with EIC present in the insula and M2–M6. The software detected an ASPECT score of 5 with EIC suspected in the M2–M6. Initially, without software assistance, the reader identifies EIC within M4 and M5 regions (ASPECTS of 8), leading to an accuracy of 60%. When assisted by software, the reader identifies EIC within the insula, M2, M5, M5 and M6 (ASPECTS of 5), leading to an accuracy of 90%.

Table 1: Mean readers’ aided and unaided accuracy (95% CI) for each subgroup. Sample size (n) is specified for each category. [Table not reproduced]

Comparison of the overall ROC AUC with and without the support of the artificial intelligence (AI) tool yielded a statistically significant improvement of 0.039 (95% CI: 0.019–0.059; P < .05), from 0.749 (95% CI: 0.712–0.785) in the unaided arm to 0.788 (95% CI: 0.762–0.814) in the aided arm. Stand-alone software ROC AUC was 0.751 (95% CI: 0.728–0.773). Analysis stratified by individual reader showed an increase in ROC AUC for all readers when assisted by software, as shown in Fig 3.

FIG 3. Per-reader ROC AUC for the aided (blue line) and unaided (red line) arms. Readers 1–4 are typical readers and readers 5–8 are senior neuroradiologists.

Score-Based Analysis

Score-based analyses focus only on the global ASPECT score without considering which specific ASPECTS regions show EIC. The ICC was used to measure interreader agreement across individual ASPECT scores. A poor ICC (0.467) was observed in the unaided arm, whereas a moderate ICC (0.689) was observed in the aided arm (P < .0001), indicating that interrater reliability was significantly improved when using the AI tool.

In addition, the Pearson correlation coefficient was computed to evaluate the linear relationship between the readers’ ASPECT scores and the reference standard. The coefficient in the aided arm (r = 0.674) was significantly higher than in the unaided arm (r = 0.587; P < .0001), indicating that scores were more closely correlated with the reference standard when readers were assisted by the device. All results are shown in Table 2.

Table 2: Score-based analyses (95% CI) for the unaided and aided arms. [Table not reproduced]

Regarding the dichotomized analysis, Cohen κ was computed for each arm by dichotomizing scores at ASPECTS ≥6. For the unaided arm, the value was 0.476, whereas for the aided arm it was 0.523 (P = .0766). On average, in 13% (26/200) of cases, the software helped readers distinguish accurately between scores of 0–5 and 6–10. In other words, with software assistance, the readers were able to make an accurate decision on whether endovascular treatment should be initiated for 26 patients for whom, without software assistance, the indicated treatment decision was incorrect. Among these 26 cases, 10 had an ASPECTS between 0 and 5, and 16 had an ASPECTS ≥6.

Interpretation Time Analysis

The mean interpretation time per case among all readers was significantly reduced when assisted by software: 108.5 seconds (95% CI: 103.3–114.8) in the aided arm versus 115.2 seconds (95% CI: 110.7–118.7) in the unaided arm. This corresponds to a statistically significant difference of −6.7 seconds (95% CI: −13.2 to −0.1; P < .05), representing a 6% reduction in interpretation time.

DISCUSSION

This study offers a thorough characterization of the effect of a DL-based tool in ASPECTS assessments conducted by stroke clinicians on external data. The results demonstrate a significant enhancement in both region-based and score-based performance, alongside reductions in per-case interpretation time. Conducted across multiple centers, scanners, and countries, this reader study benefits from a diverse data set encompassing various imaging parameters and patient profiles. Moreover, it engaged a panel of readers representing many of the subspecialties and expertise levels encountered in real-world stroke practice. Therefore, these results suggest robust generalization across a wide spectrum of real-world scenarios.

When assisted by the software, the region-based performance of all readers increased significantly, with accuracy improving by 4.1% and ROC AUC by 0.039. Notably, the improvement in accuracy was statistically significant in all subgroups, indicating that the algorithm provides meaningful assistance regardless of the anatomic location of EIC or the scanner used for CT acquisition. Furthermore, despite variation in expertise and subspecialty, all 8 readers exhibited an increase in ROC AUC, demonstrating that the DL-based tool improves performance across a range of clinical experts. Importantly, the software’s user interface did not distract any reader; all readers reported that the user experience was seamless and intuitive.

A previous reader study evaluating a different automated ASPECTS tool reported an overall improvement in accuracy of 4.2% for a panel of 8 readers and 50 CT scans; however, the performance of 2 expert readers did not improve when they were assisted by the software.31 Another study, with 16 readers evaluating a different tool, reported an overall improvement of 5.1% on 202 CT scans but demonstrated no global improvement in 2 of the cortical ASPECTS regions.30 Similarly, other authors observed a statistically significant difference in ROC AUC of 0.02 but a very small, nonsignificant difference in accuracy (1%) for a cohort of 54 cases assessed by 10 readers.32 By contrast, our results demonstrate a consistent improvement in performance across all types of readers, ASPECTS regions, and several data subcategories. Moreover, a 4.2% increase in performance corresponds to almost one-half of an ASPECTS region per scan, which could shift the endovascular treatment choice toward the right decision.

From a clinical decision-making perspective, the final global ASPECT score may be more relevant than the exact anatomic distribution of EIC because the overall composite score is used for patient selection and therapeutic triage. Indeed, high intra- and interobserver variation has been reported for conventional ASPECTS assessments, indicating that reproducibility and repeatability remain challenging for clinicians.13-16 In this study, both the score-based ICC and the Pearson correlation coefficient were significantly higher in the aided arm, indicating that readers agreed more with each other (better interrater reliability) and that the ASPECTS evaluation aligned more closely with expert consensus when assisted by the software. A similar increase in ICC with software assistance was reported by Brinjikji et al,30 who observed an improvement from fair (0.48) to good agreement (0.68; P < .01) with aided interpretation. Regarding the correlation of readers’ ASPECT scores with the reference standard, a similar study observed a significant increase in the weighted κ for 3 of 5 readers.33 Because ASPECTS is subject to considerable interrater discrepancy, these results suggest that an improvement in both readers’ accuracy and interrater agreement may enhance clinicians’ confidence, leading to more consistent and reliable decisions.

Clinical decision-making by using the endovascular cutoff point was also improved in the aided arm. Although the difference in Cohen κ was not statistically significant, 13% of patients could still have benefited from a better treatment decision with software assistance. Notably, more than one-half of these patients (62%) had ASPECTS ≥6, indicating that without the help of the software, they might have been inappropriately excluded from thrombectomy treatment. Similar results were observed by Lambert et al,33 who identified an improvement in Cohen κ values for readers’ dichotomized analyses in the aided arm, although the difference was not statistically significant. Hence, our results suggest that automated tools can reduce discrepancies, improve reliability, and yield more objective criteria for patient selection.

In the context of IS, the notion that “time is brain” arises from the fact that any delay in intervention has serious clinical consequences.46 For every second of untreated acute stroke, a patient loses nearly 32,000 neurons, 230 million synapses, 200 m of myelinated fibers, and the equivalent of 1.7 hours of healthy life.46-48 Prompt treatment is, therefore, essential for good patient outcomes. This study demonstrates that adjunctive use of the software leads to a statistically significant reduction in interpretation time. Even though 7 seconds is short relative to the total door-in to door-out cycle, based on the “time is brain” quantification, the estimated time savings may provide an additional one-half day (11.4 hours) of healthy life. To the best of our knowledge, this is the first study analyzing the direct impact of software assistance on the speed of ASPECTS evaluation. Because acute IS treatment is highly time-sensitive, even this small improvement in interpretation time can enhance the overall efficiency of the clinical workflow, allowing clinicians to manage more patients effectively, particularly in busy stroke centers.
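
For reference, the half-day figure follows directly from the per-second quantification cited above:

    6.7 s saved × 1.7 h of healthy life per second ≈ 11.4 h ≈ one-half day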

Our study has several limitations. First, all cases were collected retrospectively, which introduces potential selection bias. Second, although this study used CTP and CTA to increase the objectivity of the expert consensus, a more accurate reference standard could have been obtained with DWI acquired immediately after NCCT; however, because DWI is infrequently obtained in this setting, requiring it would have severely limited case inclusion. Third, although 8 readers participated in this study, inclusion of a larger reader cohort would improve the generalizability of the findings. Finally, although the study attempted to reproduce the clinical conditions of ASPECTS assessment as closely as possible, future prospective studies are needed to confirm the downstream impact on patient outcomes.

CONCLUSIONS

This study demonstrated that readers’ analyses were not only more accurate but also faster with the help of the algorithm. Furthermore, software-assisted, score-based interpretation yielded greater overall interreader consistency, less individual variability, and improved correlation with expert consensus. Taken together, these findings suggest that software assistance has the potential to provide better diagnoses and improve patient outcomes. Importantly, the value of this technology lies not only in its ability to compute ASPECTS accurately but also in its ability to let users interpret the ASPECTS region heat map, analyze the results, and determine the final ASPECT score based on their own clinical judgment and expertise. Multidisciplinary neuroradiologic and neurologic expertise will always be required for IS diagnosis; however, DL-based algorithms may facilitate decision-making, early treatment, and, ultimately, improved patient outcomes.

Acknowledgments

The authors thank the 8 readers who participated in the study and the clinical centers that provided the data. They also thank Laurent Turek for his assistance with data processing.

Footnotes

  • Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.

References

1. Donkor ES. Stroke in the 21st Century: A Snapshot of the Burden, Epidemiology, and Quality of Life. Stroke Res Treat 2018;2018:1–10. doi:10.1155/2018/3238165 pmid:30598741
2. Feigin VL, Stark BA, Johnson CO, et al. Global, regional, and national burden of stroke and its risk factors, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Neurol 2021;20:795–820. doi:10.1016/S1474-4422(21)00252-0
3. Saini V, Guada L, Yavagal DR. Global Epidemiology of Stroke and Access to Acute Ischemic Stroke Interventions. Neurology 2021;97:S6–16. doi:10.1212/WNL.0000000000012781 pmid:34785599
4. Hasan TF, Hasan H, Kelley RE. Overview of Acute Ischemic Stroke Evaluation and Management. Biomedicines 2021;9:1486. doi:10.3390/biomedicines9101486 pmid:34680603
5. Pu L, Wang L, Zhang R, et al. Projected Global Trends in Ischemic Stroke Incidence, Deaths and Disability-Adjusted Life Years From 2020 to 2030. Stroke 2023;54:1330–9. doi:10.1161/STROKEAHA.122.040073 pmid:37094034
6. Dhand S, O’Connor P, Hughes C, et al. Acute Ischemic Stroke: Acute Management and Selection for Endovascular Therapy. Semin Interv Radiol 2020;37:109–18. doi:10.1055/s-0040-1709152 pmid:32419723
7. Barber PA, Demchuk AM, Zhang J, et al. Validity and reliability of a quantitative computed tomography score in predicting outcome of hyperacute stroke before thrombolytic therapy. Lancet 2000;355:1670–4. doi:10.1016/s0140-6736(00)02237-6 pmid:10905241
8. Puig J, Shankar J, Liebeskind D, et al. From “Time is Brain” to “Imaging is Brain”: A Paradigm Shift in the Management of Acute Ischemic Stroke. J Neuroimaging 2020;30:562–71. doi:10.1111/jon.12693 pmid:32037629
9. Lei C, Zhou X, Chang X, et al. Mechanical Thrombectomy in Patients with Acute Ischemic Stroke and ASPECTS ≤5. J Stroke Cerebrovasc Dis 2021;30:105748. doi:10.1016/j.jstrokecerebrovasdis.2021.105748 pmid:33784521
10. Hao Y, Yang D, Wang H, et al. Predictors for Symptomatic Intracranial Hemorrhage After Endovascular Treatment of Acute Ischemic Stroke. Stroke 2017;48:1203–9. doi:10.1161/STROKEAHA.116.016368 pmid:28373302
11. Naylor J, Churilov L, Chen Z, et al. Reliability, Reproducibility and Prognostic Accuracy of the Alberta Stroke Program Early CT Score on CT Perfusion and Non-Contrast CT in Hyperacute Stroke. Cerebrovasc Dis 2017;44:195–202. doi:10.1159/000479707 pmid:28810259
12. Menon BK, Puetz V, Kochar P, et al. ASPECTS and Other Neuroimaging Scores in the Triage and Prediction of Outcome in Acute Stroke Patients. Neuroimaging Clin N Am 2011;21:407–23. doi:10.1016/j.nic.2011.01.007 pmid:21640307
13. Gupta AC, Schaefer PW, Chaudhry ZA, et al. Interobserver Reliability of Baseline Noncontrast CT Alberta Stroke Program Early CT Score for Intra-Arterial Stroke Treatment Selection. Am J Neuroradiol 2012;33:1046–9. doi:10.3174/ajnr.A2942 pmid:22322602
14. Huo X, Raynald, Jin H, et al. Performance of automated CT ASPECTS in comparison to physicians at different levels on evaluating acute ischemic stroke at a single institution in China. Chin Neurosurg J 2021;7:40.
15. Farzin B, Fahed R, Guilbert F, et al. Early CT changes in patients admitted for thrombectomy: Intrarater and interrater agreement. Neurology 2016;87:249–56. doi:10.1212/WNL.0000000000002860 pmid:27316243
16. Nicholson P, Hilditch CA, Neuhaus A, et al. Per-region interobserver agreement of Alberta Stroke Program Early CT Scores (ASPECTS). J NeuroInterventional Surg 2020;12:1069–71. doi:10.1136/neurintsurg-2019-015473 pmid:32024784
17. Radhiana H. Non-contrast Computed Tomography in Acute Ischaemic Stroke: A Pictorial Review. 2013;68:8.
18. Herweh C, Ringleb PA, Rauch G, et al. Performance of e-ASPECTS software in comparison to that of stroke physicians on assessing CT scans of acute ischemic stroke patients. Int J Stroke 2016;11:438–45. doi:10.1177/1747493016632244 pmid:26880058
19. Nagel S, Sinha D, Day D, et al. e-ASPECTS software is non-inferior to neuroradiologists in applying the ASPECT score to computed tomography scans of acute ischemic stroke patients. Int J Stroke 2017;12:615–22. doi:10.1177/1747493016681020 pmid:27899743
20. Cimflova P, Volny O, Mikulik R, et al. Detection of ischemic changes on baseline multimodal computed tomography: expert reading vs. Brainomix and RAPID software. J Stroke Cerebrovasc Dis 2020;29:104978. doi:10.1016/j.jstrokecerebrovasdis.2020.104978 pmid:32807415
21. Naganuma M, Tachibana A, Fuchigami T, et al. Alberta Stroke Program Early CT Score Calculation Using the Deep Learning-Based Brain Hemisphere Comparison Algorithm. J Stroke Cerebrovasc Dis 2021;30:105791. doi:10.1016/j.jstrokecerebrovasdis.2021.105791 pmid:33878549
22. Vesprini A. Intelligenza Artificiale nella valutazione TC dell’ictus [Artificial intelligence in CT evaluation of stroke]. 2023 Feb 20. [Epub ahead of print].
23. Hassan AE, Ringheanu VM, Rabah RR, et al. Early experience utilizing artificial intelligence shows significant reduction in transfer times and length of stay in a hub and spoke model. Interv Neuroradiol 2020;26:615–22. doi:10.1177/1591019920953055 pmid:32847449
24. Hassan AE, Ringheanu VM, Preston L, et al. Artificial Intelligence–Parallel Stroke Workflow Tool Improves Reperfusion Rates and Door-In to Puncture Interval. Stroke Vasc Interv Neurol 2022;2:e000224.
25. Elijovich L, Dornbos D III, Nickele C, et al. Automated emergent large vessel occlusion detection by artificial intelligence improves stroke workflow in a hub and spoke stroke system of care. J NeuroInterventional Surg 2022;14:704–8. doi:10.1136/neurintsurg-2021-017714 pmid:34417344
26. Goebel J, Stenzel E, Guberina N, et al. Automated ASPECT rating: comparison between the Frontier ASPECT Score software and the Brainomix software. Neuroradiology 2018;60:1267–72. doi:10.1007/s00234-018-2098-x pmid:30219935
27. Austein F, Wodarg F, Jürgensen N, et al. Automated versus manual imaging assessment of early ischemic changes in acute stroke: comparison of two software packages and expert consensus. Eur Radiol 2019;29:6285–92. doi:10.1007/s00330-019-06252-2 pmid:31076862
28. Hoelter P, Muehlen I, Goelitz P, et al. Automated ASPECT scoring in acute ischemic stroke: comparison of three software tools. Neuroradiology 2020;62:1231–8. doi:10.1007/s00234-020-02439-3 pmid:32382795
29. Guberina N, Dietrich U, Radbruch A, et al. Detection of early infarction signs with machine learning-based diagnosis by means of the Alberta Stroke Program Early CT score (ASPECTS) in the clinical routine. Neuroradiology 2018;60:889–901. doi:10.1007/s00234-018-2066-5 pmid:30066278
30. Brinjikji W, Abbasi M, Arnold C, et al. e-ASPECTS software improves interobserver agreement and accuracy of interpretation of ASPECTS score. Interv Neuroradiol 2021;27:781–7. doi:10.1177/15910199211011861 pmid:33853441
31. Delio PR, Wong ML, Tsai JP, et al. Assistance from Automated ASPECTS Software Improves Reader Performance. J Stroke Cerebrovasc Dis 2021;30:105829. doi:10.1016/j.jstrokecerebrovasdis.2021.105829 pmid:33989968
32. Kobeissi H, Kallmes DF, Benson J, et al. Impact of e-ASPECTS software on the performance of physicians compared to a consensus ground truth: a multi-reader, multi-case study. Front Neurol 2023;14:1221255. doi:10.3389/fneur.2023.1221255 pmid:37745671
33. Lambert J, Demeestere J, Dewachter B, et al. Performance of Automated ASPECTS Software and Value as a Computer-Aided Detection Tool. Am J Neuroradiol 2023;44:894–900. doi:10.3174/ajnr.A7956 pmid:37500286
34. Scopelliti G, Karam A, Labreuche J, et al. Internal carotid artery patency after mechanical thrombectomy for stroke due to occlusive dissection: Impact on outcome. Eur Stroke J 2023;8:199–207. doi:10.1177/23969873221140649 pmid:37021179
35. Orscelik A, Matsukawa H, Elawady SS, et al. Comparative Outcomes of Mechanical Thrombectomy in Acute Ischemic Stroke Patients with ASPECTS 2-3 vs. 4-5. J Stroke Cerebrovasc Dis 2024;33:107528. doi:10.1016/j.jstrokecerebrovasdis.2023.107528 pmid:38134550
36. Almallouhi E, Al Kasab S, Hubbard Z, et al. Outcomes of Mechanical Thrombectomy for Patients With Stroke Presenting With Low Alberta Stroke Program Early Computed Tomography Score in the Early and Extended Window. JAMA Netw Open 2021;4:e2137708. doi:10.1001/jamanetworkopen.2021.37708 pmid:34878550
37. Chang PD, Kuoy E, Grinband J, et al. Hybrid 3D/2D Convolutional Neural Network for Hemorrhage Evaluation on Head CT. Am J Neuroradiol 2018;39:1609–16. doi:10.3174/ajnr.A5742 pmid:30049723
38. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015 May 18. [Epub ahead of print].
39. Ayobi A, Chang PD, Chow D, et al. Validation of a Deep Learning AI-based Software for Automated ASPECTS Assessment. ECR 2023 EPOS. doi:10.26044/ecr2023/C-19206
40. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960;20:37–46. doi:10.1177/001316446002000104
41. Ying G-S, Maguire MG, Glynn RJ, et al. Calculating Sensitivity, Specificity, and Predictive Values for Correlated Eye Data. Invest Ophthalmol Vis Sci 2020;61:29. doi:10.1167/iovs.61.11.29 pmid:32936302
42. Obuchowski NA, Rockette HE. Hypothesis testing of diagnostic accuracy for multiple readers and multiple tests: an ANOVA approach with dependent observations. Commun Stat Simul Comput 1995;24:285–308. doi:10.1080/03610919508813243
43. Shao J, Tu D. The Jackknife and Bootstrap. New York, NY: Springer; 1995.
44. Smith BJ, Hillis SL. Multi-reader multi-case analysis of variance software for diagnostic performance comparison of imaging modalities. In: Samuelson FW, Taylor-Phillips S, eds. Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment. Houston: SPIE; 2020. doi:10.1117/12.2549075
45. Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J Chiropr Med 2016;15:155–63. doi:10.1016/j.jcm.2016.02.012 pmid:27330520
46. Demaerschalk BM, Scharf EL, Cloft H, et al. Contemporary Management of Acute Ischemic Stroke Across the Continuum: From TeleStroke to Intra-Arterial Management. Mayo Clin Proc 2020;95:1512–29. doi:10.1016/j.mayocp.2020.04.002 pmid:32622453
47. Saver JL. Time Is Brain—Quantified. Stroke 2006;37:263–6. doi:10.1161/01.STR.0000196957.55928.ab pmid:16339467
48. Saver JL, Goyal M, van der Lugt A, et al. Time to Treatment With Endovascular Thrombectomy and Outcomes From Ischemic Stroke: A Meta-analysis. JAMA 2016;316:1279–89. doi:10.1001/jama.2016.13647 pmid:27673305
  • Received May 30, 2024.
  • Accepted after revision September 4, 2024.
  • © 2025 by American Journal of Neuroradiology