As AI chatbots become more embedded in health decision-making, a critical barrier is emerging: people's existing beliefs may determine whether they actually learn accurate information, even when they use sophisticated verification tools. This finding challenges the assumption that AI assistance automatically improves health literacy across all users.

When 103 participants used ChatGPT to fact-check Facebook posts about gluten-free diets that contained a deliberate mix of accurate and inaccurate claims, source credibility markers proved surprisingly ineffective on their own. Posts attributed to expert sources did not improve objective knowledge relative to posts attributed to non-expert sources. However, participants who already held favorable attitudes toward the health topic showed significantly better knowledge gains when the information came from expert sources. The AI verification process itself did not override these pre-existing cognitive biases.
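To make this pattern concrete, the sketch below shows how a moderation effect of this kind is commonly tested: an ordinary least squares regression predicting knowledge from source expertise, prior attitude, and their interaction. This is an illustrative example on simulated data, not the study's actual analysis; the variable names (expert, attitude, knowledge) and effect sizes are hypothetical assumptions.

```python
# Illustrative sketch (not the study's analysis): does prior attitude
# moderate the effect of source expertise on knowledge gain?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 103  # sample size reported in the study

# Hypothetical simulated data:
#   expert    - 0/1 label for non-expert vs. expert source
#   attitude  - centered prior-attitude score toward the health topic
#   knowledge - post-task objective knowledge
expert = rng.integers(0, 2, n)
attitude = rng.normal(0, 1, n)
knowledge = (0.1 * expert            # weak main effect of expertise
             + 0.3 * attitude        # main effect of prior attitude
             + 0.5 * expert * attitude  # the moderation effect
             + rng.normal(0, 1, n))  # noise

df = pd.DataFrame({"expert": expert,
                   "attitude": attitude,
                   "knowledge": knowledge})

# "expert * attitude" expands to expert + attitude + expert:attitude.
# A non-significant coefficient on 'expert' alongside a significant
# 'expert:attitude' interaction would mirror the reported pattern.
model = smf.ols("knowledge ~ expert * attitude", data=df).fit()
print(model.summary())
```

In this framing, the study's headline result corresponds to the interaction term carrying the effect while the expertise main effect does not, which is why expert labels alone appeared ineffective.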

This research exposes a fundamental limitation in how people process health information, even with AI support. The dual-process theory of cognition appears to govern AI-assisted verification just as it does traditional information processing: people still lean on mental shortcuts and existing beliefs rather than purely analytical thinking. For health communication, this suggests that AI tools may inadvertently amplify existing disparities in health knowledge rather than democratizing access to accurate information.

The implications are particularly concerning for public health campaigns, where AI-assisted verification was expected to combat misinformation across the board. Instead, these tools may be most effective for people who already hold favorable attitudes toward evidence-based health information, potentially widening the gap between more and less health-literate populations. The study's small sample size limits generalizability, but the pattern suggests that integrating AI into health communication requires more nuanced strategies that account for individual cognitive predispositions.