Abstract
BACKGROUND AND PURPOSE: Fast, accurate detection of large vessel occlusion (LVO) and medium vessel occlusion (MeVO) is critical for the triage and management of acute ischemic stroke. Multiple artificial intelligence (AI)-based software programs are commercially available for automated detection and rapid prioritization of LVO. However, their capabilities, strengths, and limitations for detecting acute vessel occlusion in the context of expanding indications for mechanical thrombectomy are not fully understood. We aimed to investigate the performance of a fully automated commercial detection algorithm for large and medium vessel occlusions in code stroke patients.
MATERIALS AND METHODS: We used a single-center, institutional, retrospective registry of all consecutive code stroke patients with CTA and automated Viz.ai processing who presented at a large comprehensive stroke center between March 2020 and February 2023. LVO was categorized as anterior LVO (aLVO), defined as occlusion of the intracranial ICA or M1-MCA, and posterior LVO (pLVO), defined as occlusion of the basilar artery or V4-vertebral artery. MeVO was defined as occlusion of the M2-MCA, A1/A2-anterior cerebral artery, or P1/P2-posterior cerebral artery. We assessed the accuracy of Viz.ai according to the STARD guidelines, with radiology reports from 12 board-certified radiologists as the reference standard. Our primary outcome was the accuracy of the automated software for aLVO. Our secondary outcome was its accuracy for 3 additional categories: all LVO (aLVO and pLVO), aLVO with M2-MCA, and aLVO with MeVO.
RESULTS: Of 3590 code stroke patients, 3576 were technically sufficient for analysis by the automated software (median age 67 years; 51% women; 68% white), of which 616 (17.2%) had vessel occlusions. The respective sensitivity and specificity for our prespecified categories were: aLVO: 91% (87–94%), 93% (92–94%); all LVO: 73% (68–77%), 92% (91–93%); aLVO + M2-MCA occlusion: 74% (70–78%), 93% (92–94%); and aLVO + all MeVO: 65% (61–69%), 93% (92–94%).
CONCLUSIONS: The automated algorithm demonstrated high accuracy in identifying anterior LVO with lower performance for pLVO and MeVO. It is crucial for acute stroke teams to be aware of the discordance between automated algorithm results and true rates of LVO and MeVO for timely diagnosis and triage.
ABBREVIATIONS:
- ACA = anterior cerebral artery
- AI = artificial intelligence
- aLVO = anterior large vessel occlusion
- BA = basilar artery
- EVT = endovascular thrombectomy
- IQR = interquartile range
- LVO = large vessel occlusion
- MeVO = medium vessel occlusion
- PCA = posterior cerebral artery
- pLVO = posterior large vessel occlusion
- VA = vertebral artery
SUMMARY
PREVIOUS LITERATURE:
Several commercial AI-based tools for automated detection of LVO are widely used for triaging code stroke patients, often with minimal guidance from the manufacturers regarding their limitations. While most of these tools have been trained, programmed, and FDA-approved for detecting aLVO, which includes ICA and M1-MCA, they are frequently used for all code stroke patients in real-world clinical practice. Multiple studies have evaluated the sensitivity and specificity of these tools for aLVO detection. However, there has been limited analysis of their modest positive predictive value, as well as the trends in false-positive and false-negative results.
KEY FINDINGS:
This study systematically examines the performance of AI-based software (Viz.ai) across categories of vessel occlusion, particularly in the context of an evolving thrombectomy landscape that extends beyond the current capabilities of many AI platforms. Analysis in a large real-world sample of patients with acute ischemic stroke revealed high sensitivity for detecting aLVO, with a modest positive predictive value. Metrics worsened when potentially treatable M2 occlusions and pLVO were included. Horizontal dominant M2-MCA occlusions were detected more frequently than nondominant horizontal or vertical M2-MCA occlusions. Finally, the study presents a detailed analysis of false-positive and false-negative results produced by the AI.
KNOWLEDGE ADVANCEMENT:
Acute stroke providers should be aware of the limitations of automated vessel occlusion detection software, particularly in the setting of rapidly expanding indications for thrombectomy. The study serves as a knowledge base for understanding and improving these technologies.
Commercially available artificial intelligence (AI) software is a valuable tool for the rapid identification and management of patients with stroke due to large vessel occlusion (LVO). Such software has been instrumental in reducing the time taken to transfer patients to comprehensive stroke centers and in expediting the initiation of thrombectomy procedures, as evidenced by improved time from symptom onset to groin puncture.1
Automated vessel occlusion software is currently widely used at both primary and comprehensive stroke centers. One notable example of such software is Viz.ai, which leverages a convolutional neural network for image recognition tasks to detect LVO from the ICA to the Sylvian fissure with high sensitivity (82%) and specificity (94%).2,3
Multiple AI-based software programs are available commercially for automated detection and rapid prioritization of LVO cases for management and/or referral. However, their ability, strengths, and limitations for detection of acute vessel occlusion in the context of expanding indications for mechanical thrombectomy are not entirely understood. Though AI tools were introduced predominantly to detect anterior circulation LVO (aLVO, ie, ICA and M1 occlusions), in real-world practice, the tools are utilized for all code stroke patients. This creates a potential for acute stroke teams to rely on these tools to detect all vessel occlusions, including detection of posterior circulation and medium vessel occlusion, rather than aLVO alone. Further, given the efficacy of endovascular thrombectomy (EVT) to treat aLVO and advances in device technology, there is an increase in the treatment of medium vessel occlusion (MeVO), particularly M2 occlusions.4 Hence, with the widespread use of AI tools and expanding indications of EVT, it is important to understand the software’s capability to detect occlusion in vessels in addition to aLVO.
Our primary objective was to determine the accuracy of an automated detection tool for anterior LVO, defined as ICA and M1, in real-world code stroke CTA. Our secondary objective was to determine the accuracy of the automated tool to additionally identify MeVO and posterior vessel occlusion in the same population. We hypothesized that the software tool would have a high accuracy in detecting terminal ICA and M1 MCA occlusions, with lower performance in detecting posterior LVO and all MeVOs.
MATERIALS AND METHODS
Population
We performed a retrospective analysis of prospectively collected data from all consecutive code stroke patients who underwent head CTA between March 2020 and February 2023 at a large comprehensive stroke center comprising 2 distinct sites: site A and site B. Inclusion criteria were: 1) code stroke patients aged 18 years or older, 2) CTA head imaging performed within 24 hours of last known well time, and 3) automated algorithm LVO output by Viz.ai (https://www.viz.ai/) (index test) included with the CTA acquisition. Technically suboptimal CTAs, in which motion, an inadequate or missed contrast bolus, or other artifacts precluded appropriate assessment of the intracranial vasculature, were not interpreted by the software and were therefore excluded from analysis. The local Institutional Review Board approved the study, and informed consent was waived because of the retrospective study design. We followed the STARD guidelines.5
Variables
We recorded patient demographics, including age, sex, race, and ethnicity. Clinical characteristics and vascular risk factors, including diabetes mellitus, hypertension, hyperlipidemia, congestive heart failure, atrial fibrillation, smoking history, and prior history of stroke, were also collected from the electronic medical record. Among the imaging variables, we recorded the site of vessel occlusion based on the reports of board-certified radiologists.
Image Acquisition and Workflow
The CTAs were acquired on four 256-slice scanners with 80 detectors. Images were acquired in the axial plane with a slice thickness of 0.625 mm. All CTAs were acquired as a single arteriovenous-phase contrast study, with a 60-mL intravenous contrast bolus administered at a rate of 5 mL/s using bolus tracking triggered from the aortic arch and coverage from the aortic arch to the vertex. These images were reformatted in the coronal and sagittal planes after acquisition, and postprocessing also included generation of MIP images in all 3 orthogonal planes. CTA images of all code stroke patients were routed to the Viz.ai software (Version 1.84.0) directly from the CT scanner. The algorithm produced a binary output of suspected LVO versus no suspected LVO; if LVO was suspected, an alert was generated on the mobile application for the stroke care team. The software-generated image of the segmented anterior vasculature was also instantly routed to the PACS, but the software's interpretation of the study was not available to the radiologists reading the CTA. Per institutional protocol, CT perfusion was performed only for select cases presenting between 6 and 24 hours from last known well or for wake-up LVO cases being considered for mechanical thrombectomy; these studies were available to the radiologist at the time of CTA interpretation. The final CTA report was considered the reference standard. A neuroradiology fellow (A.S.) independently adjudicated all cases in which the AI identified a vessel occlusion and the clinical radiologist did not (ie, false-positive cases).
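To make the routing-and-alert workflow above concrete, a minimal sketch is shown below. This is a schematic illustration only, not the vendor's implementation, and all function and field names (run_detector, notify_stroke_team, send_to_pacs, accession) are hypothetical.

```python
# Schematic sketch of the CTA routing and alert workflow described above.
# All names are hypothetical; this does not represent the vendor's code.

def run_detector(cta_study: dict) -> tuple[bool, bytes]:
    """Stand-in for the automated algorithm: returns a binary 'suspected LVO'
    flag and the segmented-vasculature series generated by the software."""
    return False, b""  # placeholder output

def notify_stroke_team(accession: str) -> None:
    print(f"Mobile alert sent for study {accession}")  # stands in for the app notification

def send_to_pacs(series: bytes) -> None:
    print(f"Segmented series ({len(series)} bytes) routed to PACS")

def triage(cta_study: dict) -> None:
    """Every code stroke CTA is routed from the scanner to the software; the
    segmented anterior vasculature is sent to PACS, and a positive result
    additionally triggers a mobile alert to the stroke team. The radiologist's
    interpretation proceeds independently of this output."""
    suspected_lvo, segmented_series = run_detector(cta_study)
    send_to_pacs(segmented_series)  # segmented images routed to PACS
    if suspected_lvo:
        notify_stroke_team(cta_study["accession"])  # alert only when LVO is suspected

triage({"accession": "A0001"})  # hypothetical study identifier
```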
Categorization of Vessel Occlusions
Based on the radiology reports, we categorized vessel occlusions into 4 groups, starting with aLVO and then adding on discrete areas of vessel occlusion with clinical relevance:
Anterior LVO (aLVO): Acute occlusion of intracranial ICA (extending from petrous segment to ICA terminus) and M1 segment of MCA (defined as MCA extending from origin at the ICA terminus to the MCA bifurcation).6
All LVO (all LVO): Anterior and posterior circulation LVOs (aLVO and pLVO), with pLVO, including basilar artery (BA) and intracranial vertebral artery (V4 segment) occlusions.
Anterior LVO with M2-MCA occlusion (aLVO + M2): aLVO with M2-MCA occlusions, defined as any occlusion identified distal to MCA bifurcation and inclusive of the Sylvian segment.
Anterior LVO with all MeVOs (aLVO + all MeVO): Anterior LVO with all MeVO, inclusive of occlusions in the M2-MCA, A1/A2 anterior cerebral artery (ACA), or P1/P2 posterior cerebral artery (PCA).
M2-MCA occlusions were further trichotomized into horizontal M1-like M2 occlusions, horizontal non-M1-like M2 occlusions, and vertical M2 segment occlusions. M1-like M2 was defined as a dominant M2, or an M2 branch whose caliber exceeded that of the other M2 branch by more than 50%. This classification of M2 morphology was similar to that used for other ongoing randomized controlled trials at the institution's radiology research core laboratory. The neuroradiology fellow (A.S.) was trained by the senior author (A.V.) in the morphologic classification of M2. After reviewing multiple cases with appropriate agreement, the fellow assessed the M2 morphology in all cases with M2-MCA occlusions.
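As a minimal sketch (not the study's actual analysis code) of the occlusion categorization described above, the following shows how an occlusion site could be mapped to the 4 prespecified categories and how an M2 occlusion could be assigned a morphologic subtype; the site labels and function names are hypothetical.

```python
# Occlusion categorization sketch; site labels are hypothetical strings.

ALVO_SITES = {"ICA", "M1"}                   # anterior LVO
PLVO_SITES = {"BA", "V4"}                    # posterior LVO
MEVO_SITES = {"M2", "A1", "A2", "P1", "P2"}  # medium vessel occlusions

def categorize(site: str) -> dict:
    """Return membership of an occlusion site in the 4 prespecified categories."""
    is_alvo = site in ALVO_SITES
    return {
        "aLVO": is_alvo,
        "all LVO": is_alvo or site in PLVO_SITES,
        "aLVO + M2": is_alvo or site == "M2",
        "aLVO + all MeVO": is_alvo or site in MEVO_SITES,
    }

def m2_subtype(orientation: str, dominant: bool) -> str:
    """Trichotomize M2 occlusions as described above: horizontal 'M1-like'
    (dominant branch), horizontal non-'M1-like', or vertical."""
    if orientation == "vertical":
        return "vertical"
    return "horizontal M1-like" if dominant else "horizontal non-M1-like"

# Example: an M1 occlusion counts toward all 4 categories.
assert all(categorize("M1").values())
```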
Analyses and Statistics
Patient demographics (Table 1), clinical characteristics, vascular risk factors, and imaging interpretations of interest were reported as mean ± standard deviation or median with interquartile range (IQR) for continuous variables and frequencies and percentages for the categoric variables. The primary outcome was the diagnostic performance of the automated algorithm for detecting aLVO compared with the reference standard of board-certified radiologists. Secondary outcomes were performance metrics for the detection of occlusions beyond the aLVO, inclusive of pLVO (ie, all LVO), M2s (ie, aLVO + M2), and any additional anterior or posterior circulation MeVO (ie, aLVO + all MeVO).
Table 1: Demographics
Accuracy metrics, including sensitivity, specificity, positive predictive value, and negative predictive value, were calculated for each vessel category (Table 2). Additionally, precision recall curve analyses were performed for each vessel category (Supplemental Data). The study was performed according to the STARD guidelines for assessing accuracy of the index test5 (Supplemental Data). All statistical analyses were performed by using SAS Version 9.4 (SAS Institute). A P value < .05 was considered statistically significant.
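Statistical analyses in the study were performed in SAS; purely as an illustration of how the Table 2 metrics follow from a 2 × 2 contingency table, a minimal Python sketch is shown below (the counts are made up and are not the study data).

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, positive predictive value (PPV), and negative
    predictive value (NPV) from a 2x2 table of index test vs reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # detected occlusions / all true occlusions
        "specificity": tn / (tn + fp),  # correct negatives / all true negatives
        "ppv": tp / (tp + fp),          # true alerts / all positive alerts
        "npv": tn / (tn + fn),          # correct negatives / all negative outputs
    }

# Illustrative (made-up) counts, not the study's 2x2 table:
print(diagnostic_metrics(tp=90, fp=50, fn=10, tn=850))
```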
Table 2: Accuracy metrics of the automated AI tool's performance
RESULTS
A total of 3590 consecutive code stroke patients underwent CTA and automated algorithm detection analysis between March 2020 and February 2023. Fourteen of these cases were flagged by the automated AI tool as technically inadequate and were, therefore, excluded from analysis, leaving 3576 patients (2442 patients from site A and 1134 patients from site B) who met the inclusion criteria (Fig 1). Of these patients, 50.4% were women, 67.9% were white, and 27.7% were black, with a median age of 67 (IQR: 54–82). Patient demographics, clinical characteristics, and radiographic characteristics are summarized in Table 1.
Fig 1. Flowchart.
Acute vascular occlusions were identified in 616 (17.2%) patients by radiology reports: 301 aLVO (126 ICA, 175 M1), 89 pLVO (31 BA, 58 VA), 184 anterior MeVO (164 M2, 20 A1/A2), and 42 posterior MeVO (20 P1-PCA and 22 P2-PCA). The automated AI tool alerted for a possible vessel occlusion in 505 cases (Fig 2). A total of 263 (8.6%) cases were false-negatives (ie, a vascular occlusion was present per the radiology report but was not identified by the automated AI tool). Of the 2960 cases classified as non-LVO by radiologists, 152 were incorrectly identified by the software as LVO (ie, false-positive cases). Independent adjudication of all 152 false-positive cases by a neuroradiology fellow (A.S.) confirmed the true absence of LVO.
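As a simple consistency check of the counts above (assuming each study contributes one binary software output):

\[
616 - 263 = 353 \ \text{true-positive alerts}, \qquad 353 + 152 = 505 \ \text{total positive alerts}, \qquad 3576 - 505 = 3071 \ \text{negative outputs}.
\]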
Fig 2. Automated vessel occlusion detection.
Anterior LVO
Anterior LVO was identified in 301 radiology reports, of which 126 were ICA and 175 were M1 occlusions. Tandem occlusions, defined as discrete ICA and M1 occlusions with an intervening segment of normal vessel caliber and vascular flow, comprised 17 of these cases. The automated AI tool detected 271 (90.1%) aLVOs. The performance metrics and specifications of the aLVOs missed by the AI tool are elaborated in Tables 2 and 3. All 17 cases of tandem occlusion were detected by the AI tool.
Table 3: Descriptive analyses of vessel occlusions not detected by the AI tool
All LVO (Anterior and Posterior LVO)
Intracranial LVO was identified in 390 radiology reports, of which 89 were pLVO (31 BA and 58 V4). The performance metrics and specifications of the aLVOs and pLVOs not detected by the AI tool are elaborated in Tables 2 and 3. The automated algorithm did not detect 93.5% of BA occlusions (Fig 3) and 84.5% of V4 occlusions. All 11 pLVO cases alerted by the automated algorithm had an associated chronic occlusion or severe stenosis in the proximal anterior circulation; it is likely that the anterior circulation pathology, rather than true identification of pLVO, prompted the software to alert. Analysis of software-detected MeVOs other than M2 yielded a similar finding.
Fig 3. Representative examples of false-negative cases by using an automated AI algorithm. A and B, CT angiography image demonstrates occlusion of distal basilar artery and bilateral proximal posterior cerebral arteries (arrow in A). The automated AI algorithm reconstruction of vessel tree (B) did not detect an occlusion. C and D, CT angiography image at the level of skull base demonstrates occlusion of left petrous and intracavernous ICA (arrow in C) with reconstitution in supraclinoid segment (arrow in D). This occlusion was not detected by the automated AI algorithm.
Anterior LVO + M2
Anterior LVO + M2 segment occlusions were identified by radiologists in 465 cases, of which 164 were M2 occlusions. The automated AI tool detected 66 (40.2%) of the 164 M2 occlusions. Further analysis of all 164 M2 occlusions according to their location and orientation revealed 47 horizontal dominant (ie, “M1-like”) M2 occlusions, 38 nondominant (ie, non “M1-like”) horizontal M2 occlusions, and 79 vertical M2 segment occlusions (Fig 2). The performance metrics and specifications of the occlusions not detected by the AI tool are elaborated in Tables 2 and 3.
Anterior LVO + MeVO
Anterior LVO together with all MeVO accounted for 527 (85.6%) of all occlusions identified. MeVOs were identified in 226 patients, including 164 M2, 5 A1, 15 A2, 20 P1, and 22 P2 segment occlusions. The performance metrics and specifications of the occlusions not detected by the AI tool are elaborated in Tables 2 and 3. All ACA and PCA occlusions detected by the automated AI algorithm had an associated chronic occlusion or severe stenosis in the proximal anterior circulation.
Among the 152 false-positive cases (30.1% of all positive alerts by the automated tool) in the entire study, 49 (32.2%) were chronic occlusions, including 6 instances of Moyamoya disease, 24 (15.8%) had atherosclerotic luminal narrowing/irregularity of the anterior large vessels, 8 (5.3%) had cervical ICA occlusion with no concurrent intracranial vessel occlusion (possibly due to poor ipsilateral intracranial vascular contrast opacification), 7 (4.6%) scans were compromised by motion impairment, and 4 (2.6%) had mass effect due to an intracranial space-occupying lesion. There were 2 cases each of saccular aneurysm at MCA bifurcation, dolichoectasia (Fig 4), venous contamination, intracranial foci of contrast extravasation, and low ejection fraction leading to poor contrast bolus. Most of the false-positive cases with atherosclerotic plaque burden demonstrated moderate (approximately 50%–70%) luminal stenosis of the cavernous and supraclinoid ICA segments and the ICA terminus. In the remaining 50 (32.9%) false-positive cases, no definite reason potentially leading the AI algorithm to detect a vessel occlusion could be identified on imaging review.
Fig 4. Representative examples of false-positive cases by automated AI algorithm. A and B, CT angiography images demonstrating dolichoectatic vessels traversing in and out of the axial planes (arrows), falsely perceived as occlusion by the automated AI algorithm. C and D, CT angiography image demonstrating a saccular aneurysm (blue arrow in C) at the right distal MCA, pointing anteriorly adjacent to its bifurcation. No flow-limiting stenosis or occlusion was noted. Vessel tree segmented and reconstructed by automated AI algorithm (D) falsely detected an occlusion.
DISCUSSION
In this real-world cohort, automated detection software (Viz.ai) demonstrated high agreement with board-certified radiologists for aLVO, with lower agreement when M2-MCA, pLVO, or all MeVOs were included. We found the sensitivity and specificity for detecting anterior circulation LVO were 91% and 93%, respectively. These results are consistent with the manufacturer’s stated performance and multiple prior studies in the literature.2,7,8 However, the AI tool demonstrated a modest positive predictive value of 64.1% for anterior LVO detection, given the relatively high number of false-positive cases. These findings align with results from previously published studies using the same AI tool (positive predictive value 48%–77%) and a different AI tool (RAPID AI; iSchemaView; positive predictive value 43%).2,9-11
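For orientation, the reported positive predictive value can be approximated from the counts given in the Results (271 detected aLVOs and 152 adjudicated false-positive alerts); this is a back-of-the-envelope reconstruction rather than the study's exact 2 × 2 construction:

\[
\mathrm{PPV} = \frac{TP}{TP + FP} \approx \frac{271}{271 + 152} \approx 64.1\%.
\]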
When M2 segment occlusions are added to the aLVO category, the sensitivity decreases to 74%. Other studies have demonstrated similar accuracy of the automated AI algorithms for anterior LVO and M2 occlusion detection, with sensitivity ranging between 68.5% and 74.6%,7,8 suggesting that automated vessel detection tools can miss a sizable percentage of potentially treatable anterior vascular occlusions. Given the increasing rates of M2 EVT,12 the lower accuracy of automated software to detect aLVO and M2-MCA occlusions highlights the discrepancy between real-world practice and the ability of automated software to consistently detect occlusion in this location.4
M2-MCA morphology can be highly variable, with some proximal M2-MCAs more closely resembling M1-MCA occlusions (“M1-like” M2) than other MeVOs.13 We analyzed the M2-MCA occlusions in further detail based on M2 morphology and orientation (horizontal versus vertical segment) at the occlusion site, features that might influence automated detection. Most of the M2 occlusions missed by the automated AI algorithm were in the vertical or insular M2 segment (62.2%), whereas most of the M2 occlusions detected were in the horizontal M2 segment (72.7%). Horizontal M2 occlusions were further dichotomized into “M1-like” and non “M1-like” M2 morphology. We found that 74.5% of “M1-like” M2 occlusions were detected by the automated AI tool compared with 34.2% of horizontal non “M1-like” M2 occlusions. This difference in detection rate is intuitive, as a larger-caliber vessel occlusion and the consequent difference in contrast opacification are more likely to be interpreted correctly by the software. Our findings highlight a lower detection rate for vertical M2 and non “M1-like” M2 occlusions combined (26.5%) compared with “M1-like” M2 occlusions (74.5%).
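Assuming the subgroup percentages above are exact, the detected M2 counts can be reconstructed approximately (these per-subtype counts are inferred from the reported percentages, not stated directly in the text):

\[
0.745 \times 47 \approx 35 \ \text{horizontal “M1-like”}, \qquad 0.342 \times 38 \approx 13 \ \text{horizontal non “M1-like”}, \qquad 66 - 35 - 13 = 18 \ \text{of } 79 \ \text{vertical},
\]
\[
\text{so that } \frac{13 + 18}{38 + 79} = \frac{31}{117} \approx 26.5\%, \ \text{consistent with the combined rate quoted above.}
\]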
We found that the sensitivity for combined aLVO and pLVO detection (73%) was considerably lower than for aLVO detection alone (91%). Other studies have found similar results when posterior circulation vascular occlusions were included in the performance analyses.7 EVT in acute BA occlusion is associated with improved functional outcomes, reiterating the significance of prompt pLVO detection and notification.14,15 Given the high mortality associated with BA occlusion and the frequently atypical symptoms of these occlusions, strong caution should be taken to avoid delayed diagnoses.16 Of note, we included V4 occlusions with BA occlusions in the pLVO category, although of the 2 recent randomized clinical trials supporting EVT in BA occlusion, only a small subset of vessel occlusions in the EVT for acute BA occlusion trial were V4 occlusions.14,17 Our findings demonstrate that centers routinely performing EVT for aLVO and pLVO should be aware that reliance on this and other AI platforms, which are not trained to detect pLVO, could lead to delays in diagnosis for occlusions in this location.
Expectedly, the sensitivity of the automated software to detect occlusion was even lower when anterior and posterior MeVOs were added. The sensitivity for this subgroup was 65% in our study, similar to the sensitivity for this subgroup (59.4%) reported by previous studies.7 Posterior circulation LVOs and MeVOs are not included in the segmentation and analysis by automated AI algorithms, resulting in 78 (87.6%) missed posterior LVOs and 38 (90.5%) missed posterior MeVOs in our series.
The high number of false-positives (30% of all positive alerts) can likely be attributed to the emphasis on achieving high sensitivity for automated occlusion detection. Most of the false-positive cases can be attributed to nonemergent intracranial radiologic findings as described above, most importantly, atherosclerosis and chronic occlusions.
Our study is strengthened by the analysis of a large data set obtained from multiple locations comprising the region’s largest comprehensive stroke center. CTA studies included in the analyses were reported by board-certified neuroradiologists and emergency radiologists, closely mirroring the typical work distribution among radiologists at numerous stroke centers nationwide. Additionally, we characterized the morphology and anatomic location of M2 occlusions that were more likely to be detected by AI, an analysis that is novel to the literature. Finally, we categorized vascular occlusions based on the changing landscape of EVT.
Given the rapidly expanding indications for EVT, providers, particularly at experienced centers, frequently change their practice in advance of formal guideline changes following landmark studies.14,15,18,19 This creates a dynamic in which BA and M2 thrombectomy, depending on the clinical scenario, are already considered treatment options. Reliance upon automated vessel detection tools without awareness of their limitations could create diagnostic delays. Additionally, while AI is an extremely valuable tool for early detection, commercial software cannot take patient presentation or clinical context into account as human assessment can. For these reasons, the manufacturers of this and many other software programs utilized in stroke care have marketed their products as tools for assessment rather than diagnosis.
Our study is limited by its retrospective design, the use of patients from a single academic center, and the use of a single AI detection system. Importantly, we did not investigate the effect of false-positive and false-negative software results on key clinical time metrics, such as time to groin puncture, or on decision-making and patient outcomes. Though all cases included in this analysis were within 24 hours of last known well, the time from symptom onset to imaging for individual cases was not available. Further, CTP, which can aid in the diagnosis of vessel occlusion, was not available for all studies analyzed because institutional protocols limit its use to 6–24 hours from last known well. Additionally, the accuracy of LVO discrimination by the software may be influenced by the uniform CT quality metrics and the true LVO incidence inherent to a single-center study.
CONCLUSIONS
Evaluation of the ability of AI to detect vessel occlusions in a large real-world sample of patients with acute ischemic stroke revealed high sensitivity for detecting aLVO, with a modest positive predictive value. Worse metrics were noted when potentially treatable M2s and pLVO were included. Acute stroke providers should be aware of the limitations of automated vessel occlusion detection software, particularly in the setting of rapidly expanding indications for EVT.
Footnotes
Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.
- Received May 29, 2024.
- Accepted after revision September 20, 2024.
- © 2025 by American Journal of Neuroradiology