# 3 Questions With Kathleen McKeown: Controlling Model Hallucinations in Natural Language Generation - Amazon Science

![rw-book-cover](https://readwise-assets.s3.amazonaws.com/static/images/article1.be68295a7e40.png)

## Metadata
- Author: [[Applied Scientist]]
- Full Title: 3 Questions With Kathleen McKeown: Controlling Model Hallucinations in Natural Language Generation - Amazon Science
- Category: #articles
- URL: https://www.amazon.science/latest-news/3-questions-with-kathleen-mckeown-controlling-model-hallucinations-in-natural-language-generation#:~:text=Neural%20language%20generation%20approaches%20are,entity%20or%20the%20relation%20level.

## Highlights
- Neural language generation approaches are known to hallucinate content, resulting in generated text that conveys information that did not appear in the input. Factual inconsistency resulting from model hallucinations can occur at either the entity or the relation level.
- Other kinds of hallucinations are more difficult to spot: relational inconsistencies, where the entities exist in the source document, but the relations between these entities are absent.
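
The entity-level case in the first highlight lends itself to a simple surface check: any named entity in the generated text that never appears in the source is a candidate hallucination. Below is a minimal sketch of such a check, assuming spaCy with its en_core_web_sm model (neither is mentioned in the article); the function name `unsupported_entities` is hypothetical. As the second highlight notes, a check like this cannot catch relational inconsistencies, where every entity is grounded in the source but the stated relation between them is not.

```python
import spacy

# Minimal sketch of an entity-level consistency check (an assumption, not the
# method from the article). It flags named entities that appear in the
# generated text but never occur in the source document.
nlp = spacy.load("en_core_web_sm")


def unsupported_entities(source: str, generated: str) -> list[str]:
    """Return named entities in `generated` whose surface form is absent from `source`."""
    source_doc = nlp(source)
    generated_doc = nlp(generated)

    # Lower-cased entity strings found in the source document.
    source_entities = {ent.text.lower() for ent in source_doc.ents}

    flagged = []
    for ent in generated_doc.ents:
        # Also allow a raw substring match, since NER boundaries can differ
        # between the source and the generated text.
        if ent.text.lower() not in source_entities and ent.text.lower() not in source.lower():
            flagged.append(ent.text)
    return flagged


if __name__ == "__main__":
    source = "Kathleen McKeown gave a talk at Columbia University on Tuesday."
    summary = "Kathleen McKeown gave a talk at Stanford University on Tuesday."
    # Expected (assuming the NER model tags the university): ['Stanford University']
    print(unsupported_entities(source, summary))
```

Note the limitation this sketch illustrates: a summary such as "Columbia University hosted Kathleen McKeown's talk on Tuesday, which she cancelled" would pass the entity check even if the cancellation is unsupported, because the error lies in the relation, not in any entity.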