
Misinterpretation of how AI works

Users often misunderstand how AI systems work, leading to misplaced trust, uncritical acceptance of outputs, and epistemic short-circuits between perceived credibility and actual reliability.

Generative AI in the context of assistive technologies: Trends, limitations and future directions
ScienceDirect
LLMs like GPT-3 lack transparency: the complexity of their internal architectures and their "black box" nature make them hard to interpret, so it is difficult to understand how specific inputs are processed to produce outputs.
How AI Literacy Shapes GenAI Use
Nielsen Norman Group
In our study, some users who were skilled at prompting often failed to check error-prone information (such as pricing or instructions for using a user interface). This behavior suggests that they may not have been fully aware of genAI's limitations.
Knowledge–Receptivity Paradox: lower AI conceptual knowledge predicted higher receptivity to using AI — users with lower conceptual knowledge were more likely to see AI as "magical."
Checking AI's Work: While novices and naive power users were more open to using genAI for their information-seeking tasks, they were also less discerning about the information they received from genAI. They were more delighted and impressed with the results, and more likely to accept them without scrutiny.
Epistemia
LinkedIn
An LLM can generate, in the same academic tone and style, both an accurate explanation of the placebo effect and a fabricated claim about "water memory" presented as fact. Both sound true; only one is.
Epistemology reveals a short circuit between perceived credibility and actual reliability. Content may seem true to us, not because it is, but because its linguistic form reminds us of people who usually say true things.
LLMs are optimized to produce responses consistent with the context, not necessarily the truest ones. This can lead to sycophancy: the tendency of models to confirm whatever the interlocutor appears to want to hear.
