Position: Meta-Governance and Specialty Society Leadership Are Essential to Responsibly Implement Generative AI in Cardiovascular Care
Abstract
Generative artificial intelligence (AI), particularly large language models (LLMs), introduces new opportunities in cardiology but requires careful implementation to ensure safety, equity, and accountability. Early applications have focused on workflow support and patient education, but extending LLMs to diagnostic or management tasks raises challenges not seen with drugs, devices, or conventional machine learning. Four barriers define the current landscape. First, external validation, the gold standard for cardiovascular risk models, is inadequate for LLMs, which instead require recurring local validation. Second, outputs vary with prompt phrasing, data format, and user expertise, yet current Food and Drug Administration (FDA) guidance does not address these contextual factors. Professional cardiology societies such as the American Heart Association (AHA), American College of Cardiology (ACC), and European Society of Cardiology (ESC) could help fill this gap by establishing standards that reduce ambiguity and improve reproducibility. Third, unlike drugs and devices, which undergo post-market surveillance, LLMs lack structured governance to ensure accuracy and equity over time. Fourth, regulatory and reimbursement clarity to incentivize safe deployment remains absent. Coordinated regulation and specialty society leadership that convenes multidisciplinary stakeholders under meta-governance are essential for the responsible implementation of generative AI in cardiovascular care.