#ai #epistemology
# [[Epistemic status]]
#shower-thought
# Related
# Hallucination
#to-digest
>Neural language generation approaches are known to hallucinate content, resulting in generated text that conveys information that did not appear in the input. Factual inconsistency resulting from model hallucinations can occur at either the entity or the relation level [^1]
>other kind of hallucinations that are more difficult to spot: relational inconsistencies, where the entities exist in the source document, but the relations between these entities are absent [^1]
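The entity/relation distinction in the quotes above can be made concrete with a toy sketch. This is a minimal illustration only, assuming facts are hand-written `(subject, relation, object)` triples and using invented example entities; a real system would extract the triples with NER and relation-extraction models rather than comparing them by string match.

```python
# Toy illustration of entity-level vs. relation-level hallucination [^1].
# Assumption: facts are hand-written (subject, relation, object) triples.

SOURCE_FACTS = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "won", "Nobel Prize in Physics"),
}

def classify_hallucination(generated_fact, source_facts):
    """Classify one generated triple as faithful, relation-level, or entity-level."""
    if generated_fact in source_facts:
        return "faithful"
    source_entities = {e for s, _, o in source_facts for e in (s, o)}
    subject, _, obj = generated_fact
    # Relation-level: both entities appear in the source, but are not linked this way.
    if subject in source_entities and obj in source_entities:
        return "relation-level hallucination"
    # Entity-level: the generated text introduces an entity absent from the source.
    return "entity-level hallucination"

print(classify_hallucination(("Marie Curie", "born_in", "Warsaw"), SOURCE_FACTS))
# faithful
print(classify_hallucination(("Marie Curie", "born_in", "Nobel Prize in Physics"), SOURCE_FACTS))
# relation-level hallucination: both entities exist in the source, the relation does not
print(classify_hallucination(("Marie Curie", "born_in", "Paris"), SOURCE_FACTS))
# entity-level hallucination: "Paris" never appears in the source
```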
What is hallucination in [[Artificial intelligence|AI]], formally? In [[AI generated information is not necessarily evil]] we explore the [[Morality|ethic]]al implications of [[Information|information]] that leads to false [[Belief|beliefs]] about [[Objective reality|objective reality]], that is, [[The Map is not the Territory|a map that does not reflect the territory]].
So it seems that many researchers try to solve hallucination without first defining what it is, or relying only on a [[Metaphysical|metaphysical]] [[Philosophy/Epistemology/Knowledge|knowledge]] of it?
# External links
[^1]: https://www.amazon.science/latest-news/3-questions-with-kathleen-mckeown-controlling-model-hallucinations-in-natural-language-generation#:~:text=Neural%20language%20generation%20approaches%20are,entity%20or%20the%20relation%20level.