We thank Mingjia et al for their valuable feedback on our article. We would also like to express our gratitude to AJNR for providing the opportunity to respond.
First, as noted in the limitations of our study, the dataset did not reflect the real-world prevalence of iNPH. Because iNPH is a rare disease, we had to use a dataset whose class balance differs from real-world prevalence in order to include a sufficient number of iNPH patients. Nevertheless, we believe this study demonstrates the robustness of the AI model through the results obtained with cross-validation and an unseen test dataset.
Second, exploring alternative AI approaches and comparing them with the model we developed would be a valuable research direction; further work is needed in this area.
Third, regarding the concern about the lack of external validation across diverse scan parameters and population characteristics, which calls into question the model's adaptability to real-world clinical environments: as stated in the Methods section, we performed cross-validation and reported results for both the training dataset and the unseen test dataset. These two datasets come from different hospitals and were acquired on MRI scanners from different vendors. Moreover, because the unseen test dataset includes data from several MRI machines, it demonstrates the robustness of the AI model to real-world heterogeneity.
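For readers unfamiliar with this protocol, the following is a minimal sketch of the general idea (cross-validation on the training hospital's data, then a single evaluation on an unseen test set from a different hospital). It assumes a scikit-learn-style classifier and uses synthetic placeholder data; it is not the pipeline or data from our study.

```python
# Hypothetical sketch: k-fold cross-validation on training data,
# then one evaluation on an unseen test set from another site.
# All data below are random placeholders, not our study data.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))   # site A (placeholder features)
y_train = rng.integers(0, 2, size=200)
X_test = rng.normal(size=(80, 4))     # site B, different vendor (placeholder)
y_test = rng.integers(0, 2, size=80)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)

# Cross-validation on the training dataset
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
cv_auc = cross_val_score(clf, X_train, y_train, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {cv_auc.mean():.2f} +/- {cv_auc.std():.2f}")

# Final fit, then a single evaluation on the unseen test dataset
clf.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"unseen-test AUC: {test_auc:.2f}")
```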
Additionally, although many AI algorithms lack interpretability because of their black-box nature, our model mitigates this weakness by using a decision tree. Decision trees are inherently interpretable, in the spirit of local interpretable model-agnostic explanations, which improves interpretability.1 This approach uses a logical structure similar to the way neuroradiologists assess iNPH, which can be seen most easily in Figure 1. As a consequence, the model may have lower discriminative ability than other AI tools, but it offers higher interpretability; a hypothetical illustration of how such a tree can be read as explicit rules is sketched below.

Lastly, although we believe we validated the model on a heterogeneous dataset, AI cannot be fully trusted and therefore cannot replace radiologists. However, we believe that the development of such models can assist neurologists and neuroradiologists in reducing underdiagnosis and misdiagnosis, ultimately benefiting the clinical environment.
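As a concrete illustration of the interpretability point above, a fitted decision tree can be exported as human-readable if/then rules that a clinician can audit. The feature names and data below are illustrative stand-ins chosen for this sketch (they are not taken from our article), and the code assumes scikit-learn.

```python
# Hypothetical illustration of decision-tree interpretability:
# the fitted tree is printed as explicit if/then rules.
# Feature names and labels are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["evans_index", "callosal_angle", "desh_score"]  # illustrative
X = rng.normal(size=(150, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # synthetic labels

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # readable rules
```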
Reference
1. Ennab M, Mcheick H. Designing an interpretability-based model to explain the artificial intelligence algorithms in healthcare. Diagnostics 2022;12(7):1557.