Researchers fine-tuned the InternVL2-4B vision-language model on 20,000 angiogram images from 1,987 patients to automate coronary artery disease diagnosis. The model achieved modest performance: a 60% F1-score for stenosis detection, a 46% F1-score for anatomy segmentation, and 42% accuracy for full report generation. The system can identify blockages, label arterial anatomy, and generate clinical reports from angiographic images. This represents a notable step for cardiac AI, as most previous systems performed only simple classification rather than comprehensive report generation.

While the performance falls short of expert-level interpretation, the technology could prove valuable in resource-limited settings where specialist expertise is scarce, and for auditing the appropriateness of interventions. The moderate accuracy scores highlight current limitations: clinical deployment would require substantial improvement and rigorous validation. As this is a preprint awaiting peer review, the results remain preliminary, and the methodology requires expert scrutiny before clinical translation. The work demonstrates feasibility but underscores the difficulty of automating nuanced medical image interpretation that directly affects patient care decisions.
AI Vision Model Achieves 60% F1-Score in Coronary Angiogram Analysis
📄 Based on research published in medRxiv preprint
⚠️ This is a preprint — it has not yet been peer-reviewed. Results should be interpreted with caution and may change following peer review.
For informational, non-clinical use. Synthesized analysis of published research — may contain errors. Not medical advice. Consult original sources and your physician.