Posted by SASTA
on 02/02/2026
Joanne Villis, Director of Technology Enrichment, St Dominic's Priory College
Republished with permission - LinkedIn article
Introduction: Why Terminology Matters
After reading Nick Potkalitsky’s recent piece, Beyond True or False: Teaching Students to Interrogate AI Unreliability (LINK), I found myself returning to a point several people have raised with me lately: that we shouldn’t call AI errors “hallucinations”.
So why do we call AI errors “hallucinations”, and is it the wrong word to use in education?
My argument is simple: the term ‘hallucination’ misrepresents how AI works, and educational contexts require language that supports—not distorts—students’ understanding of AI systems. If we want students to interrogate AI outputs critically, our vocabulary must accurately reflect how these systems generate errors.
The term “hallucination” has become widespread in discussions about generative AI. But its origins, implications, and limitations deserve closer attention, especially for those of us supporting students and teachers to develop critical AI literacy.
1. Where the Term “Hallucination” Came From
The term hallucination did not originate inside computer science. It was borrowed from psychology, where it describes human perceptual experience, and repurposed to name a very different phenomenon: AI producing incorrect, fabricated, or unsupported information while sounding confident.
“In the late 2010s, the term underwent a semantic shift to signify the generation of factually incorrect or misleading outputs by artificial intelligence systems”(LINK).
Early LLM research used the term as a metaphor. Papers from the late 2010s and early 2020s defined hallucination in AI as plausible but factually incorrect content generated by a model. According to Rawte et al. (2023):
“the majority of these falsehoods are widely recognized as hallucination, which can be defined as the generation of content that deviates from the real facts, resulting in unfaithful outputs” (p. 2541) (LINK).
Early research described hallucination as a newly emerging problem, noting that it:
“parallelly emerged… posing significant concerns” (Rawte et al., 2023, p. 2541).
Experts also observed that AI-generated content:
“can seem convincingly human-like,” (Anil Seth, quoted in Wikipedia: AI Hallucination),
contributing to the perception that these errors resembled confident human expression.
In addition, researchers used the term as an umbrella category, describing:
“the majority of these falsehoods” under one label (Rawte et al., 2023, p. 2541)
and explicitly grouping six types of hallucinations within the same terminology.
Because this single term conveniently bundled together many unrelated error types, it spread rapidly through early AI discourse, even before educators began deeply engaging with AI.
Six Types of Hallucination (Rawte et al., 2023)
- Numeric Nuisance (NN) – Incorrect numeric information. “Numeric Nuisance (NN): This issue occurs when an LLM generates numeric values related to past events, such as dates, ages, or monetary amounts, that are inconsistent with the actual facts.” (Rawte et al., 2023, p. 2543)
- Acronym Ambiguity (AA) – Incorrect expansion of acronyms. “Acronym Ambiguity (AA): This issue pertains to instances in which LLMs generate an imprecise expansion for an acronym.” (Rawte et al., 2023, p. 2543)
- Generated Golem (GG) – Inventing people or entities that do not exist. “Generated Golem (GG): This issue arises when an LLM fabricates an imaginary personality in relation to a past event, without concrete evidence.” (Rawte et al., 2023, p. 2543)
- Virtual Voice (VV) – Inventing quotes. “Virtual Voice (VV): At times LLMs generate quotations attributed to either fictional or real characters without sufficient evidence to verify the authenticity of such statements.” (Rawte et al., 2023, pp. 2543–2544)
- Geographic Erratum (GE) – Incorrect geographical information. “Geographic Erratum (GE): This problem occurs when LLMs generate an incorrect location associated with an event.” (Rawte et al., 2023, p. 2544)
- Time Wrap (TW) – Mixing or confusing timelines. “Time Wrap (TW): This problem entails LLMs generating text that exhibits a mashed fusion of events from different timelines.” (Rawte et al., 2023, p. 2544)
2. Why We Should Reconsider Using “Hallucination” in Education
2a. Anthropomorphism
Several recent analyses argue that the term is misleading because it anthropomorphizes the model.
“Anthropomorphism is problematic when it involves the misleading attribution of human properties to systems that lack those properties, giving rise to false expectations for how the system will behave”(Shanahan, 2024, p. 2).
The core argument of these analyses is that the metaphor gives AI human-like qualities it does not possess.
A hallucination is a human psychological event involving perception, meaning-making, and subjective experience. AI systems do not “perceive,” “imagine,” or “experience.”
Using the term “hallucination” therefore risks reinforcing the misconception that AI has internal mental states, exactly the misunderstanding educators are trying to avoid.
2b. Mechanisms, Not Metaphors
LLMs work via token prediction, not perception. They simply predict the next word (token) based on statistical patterns in their training data.
Where hallucination implies a failure of perception, an AI “error” is a failure of prediction.
When AI generates incorrect information, it is not because the model believes or imagines something. Rather, the model:
- has gaps in training data
- misinterprets the prompt
- overgeneralises a pattern
- or produces an unsupported prediction
This demonstrates that so-called ‘hallucinations’ are outputs of statistical prediction mechanisms, not failures of perception or cognition.
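The point can be made concrete with a minimal, purely illustrative sketch of next-token prediction. (This code is not from the article; the tokens and probabilities are invented for demonstration.) Nothing in it perceives or believes anything; it simply samples a continuation in proportion to learned frequencies, so an unsupported output is just an unlucky or under-trained prediction.

```python
import random

# Toy next-token probabilities, standing in for the statistical
# patterns a trained LLM extracts from its corpus.
# (Illustrative numbers only -- not from any real model.)
next_token_probs = {
    ("the", "capital", "of", "Australia", "is"): {
        "Canberra": 0.6,   # well supported by training data
        "Sydney": 0.35,    # a common pattern the model may overgeneralise
        "Melbourne": 0.05, # a rarer but still possible prediction
    },
}

def predict_next(context):
    """Sample the next token from the learned distribution.

    The model does not 'know' or 'believe' the answer: it draws a
    token weighted by how often similar patterns appeared in training.
    A fluent but wrong draw ('Sydney') is a failure of prediction,
    not of perception.
    """
    probs = next_token_probs[tuple(context)]
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next(["the", "capital", "of", "Australia", "is"]))
```

Run it a few times: most draws are correct, but some are confidently wrong, which is exactly the behaviour the “hallucination” label obscures.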
Educators benefit when students understand that LLMs operate through probability, not intention.
2c. Why This Matters for Students
Before we can improve students’ AI literacy, we must ensure the language we use to describe AI behaviour reflects how these systems actually work.
If we want students to adopt Potkalitsky’s call to “interrogate AI unreliability”, then the terminology must support analytical thinking, not metaphors that blur the line between human and machine cognition.
Choosing precise vocabulary:
- reinforces correct mental models
- supports ethical and critical use
- demystifies AI behaviour
- empowers students to question outputs
- reduces the risk of over-trusting AI
We should be describing underlying mechanisms, not metaphors, using technical terms such as:
- model limitations
- data gaps
- statistical uncertainty
- unreliable or invalid outputs
- bias
A shift toward mechanism-based language is essential if students are to develop accurate and critical AI literacy.
Where, then, is the pedagogical guidance to help teachers talk about AI errors accurately and responsibly?
If we want students to question AI outputs effectively, then educators must model vocabulary that makes the system’s behaviour transparent, not mystified by metaphor.
The language we choose either equips students with critical tools or obscures the mechanisms they need to understand.
Reference List
Rawte, V., Patwa, P., Nair, R., & Bhattacharyya, P. (2023). Hallucinations in Large Language Models: A Taxonomy and Survey. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2539–2554. https://aclanthology.org/2023.emnlp-main.155/
Shanahan, M. (2024). Anthropomorphism in Artificial Intelligence. Inquiry. Advance online publication. https://doi.org/10.1080/0020174X.2024.2434860
Wikipedia contributors. (2024). Hallucination (artificial intelligence). In Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Potkalitsky, N. (2024). Beyond true or false: Teaching students to interrogate AI unreliability. LinkedIn. https://www.linkedin.com/pulse/beyond-true-false-teaching-students-interrogate-ai-potkalitsky-phd-r3tye/
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1706.03762