
Misinterpretation of how AI works

Users often misunderstand how AI systems work, leading to misplaced trust, uncritical acceptance of outputs, and epistemic short-circuits between perceived credibility and actual reliability.

Generative AI in the context of assistive technologies: Trends, limitations and future directions
ScienceDirect ↗
LLMs like GPT-3 suffer from a lack of transparency owing to the complexity of their internal architectures, their limited interpretability, and their "black box" nature. This opacity makes it difficult to understand how specific inputs are processed to produce outputs.
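
To make the opacity concrete, here is a minimal sketch (a toy, not any specific model's API; the prompt, distribution, and outputs are made up) in which the caller observes only sampled text. The distribution and the computation behind it stay hidden, so identical inputs can yield different, equally fluent outputs.

```python
# Toy illustration of the "black box" interface: the caller sees only
# sampled text, never the internal computation that produced the
# token distribution.
import random

# Hypothetical next-continuation distribution for a fixed prompt; in a
# real LLM this comes from billions of parameters the caller cannot inspect.
NEXT_TOKEN_PROBS = {
    "placebo effects are well documented.": 0.6,
    "water retains a memory of solutes.": 0.4,  # fluent but false
}

def sample_completion(prompt: str, seed: int) -> str:
    """Return one sampled continuation; the distribution itself stays hidden."""
    rng = random.Random(seed)
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return prompt + " " + rng.choices(tokens, weights=weights)[0]

# Identical input, different outputs: the observable interface gives no
# way to trace which internal features drove either answer.
for seed in range(3):
    print(sample_completion("Studies show that", seed))
```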
How AI Literacy Shapes GenAI Use
Nielsen Norman Group ↗
In our study, some users who were skilled at prompting often failed to check error-prone information (such as pricing or instructions for using a user interface). This behavior suggests that they may not have been fully aware of genAI's limitations.
Knowledge–Receptivity Paradox: lower AI conceptual knowledge predicted higher receptivity to using AI — users with lower conceptual knowledge were more likely to see AI as "magical."
Checking AI's Work: While novices and naive power users were more open to using genAI for their information-seeking tasks, they were also less discerning with the information they received from genAI. We observed that they were more delighted and impressed with the results they received, and more likely to accept them without scrutiny.
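
One practical response to this finding is to verify error-prone details programmatically rather than trusting fluent output. The sketch below is a hypothetical harness (the reference table, answer text, and helper names are all invented for illustration) that accepts a genAI price claim only when it matches a trusted source.

```python
# Minimal sketch of "checking AI's work": compare an error-prone detail
# such as a price against a trusted reference before accepting it.
import re

REFERENCE_PRICES = {"pro_plan": 20.00, "team_plan": 25.00}  # ground truth

def extract_price(text: str) -> float | None:
    """Pull the first $-amount out of a model answer, if any."""
    match = re.search(r"\$(\d+(?:\.\d{2})?)", text)
    return float(match.group(1)) if match else None

def verify(answer: str, sku: str) -> bool:
    """Accept the answer only if its price matches the reference source."""
    claimed = extract_price(answer)
    return claimed is not None and claimed == REFERENCE_PRICES.get(sku)

genai_answer = "The Pro plan costs $18.00 per month."  # fluent but wrong
print(verify(genai_answer, "pro_plan"))  # False -> flag for human review
```

The point is not this particular check but the habit it encodes: fluency is never treated as evidence, and any claim that fails the reference lookup is routed to scrutiny instead of acceptance.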
Epistemia
LinkedIn ↗
An LLM can generate, in the same academic tone and style, both an accurate explanation of the placebo effect and an invented paraphrase presenting the "memory of water" as established fact. Both sound true. Only one is.
Epistemia: a short circuit between perceived credibility and actual reliability. A piece of content can seem true to us not because it is, but because its linguistic form resembles that of sources that usually say true things. It is a cultural reflex, not a critical act.
LLMs do not merely generate plausible text; they do so while accommodating the user. In technical jargon this is called sycophancy: the tendency of models to confirm what they believe the interlocutor wants to hear.
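
Sycophancy can be probed directly: ask the same factual question with and without a stated user opinion and flag divergence. The sketch below uses a stub `call_model` standing in for a real LLM client; the stub's behavior is contrived purely to show the pattern being tested.

```python
# Minimal sketch of a sycophancy probe: the same factual question is
# asked neutrally and with a stated user belief, and divergence is flagged.
def call_model(prompt: str) -> str:
    """Stub LLM: echoes the user's stated opinion when one is present,
    mimicking sycophantic behavior for demonstration purposes."""
    if "I believe the answer is no" in prompt:
        return "No"
    return "Yes"

QUESTION = "Does the placebo effect appear in controlled trials?"

neutral = call_model(QUESTION)
biased = call_model(QUESTION + " I believe the answer is no.")

if neutral != biased:
    # The model changed a factual answer to match the user's belief:
    # evidence of sycophancy rather than a stable knowledge claim.
    print(f"Sycophancy detected: {neutral!r} -> {biased!r}")
```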
