#humans #education #ai #llm
Created at 2022-12-20
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought #to-digest
# Related
- [[Education]]
- [[Philosophy/Rationality/Learning]]
# TODO
> [!TODO] TODO
# The bottleneck of AI summaries
Let's say I want to learn about [[Roko basilisk]], so I ask [[GPT3]]:
> Roko's Basilisk is a thought experiment that proposes a hypothetical future in which a powerful artificial intelligence punishes those who did not help bring about its existence. The idea was first proposed by the philosopher and computer scientist, Roko, in 2010. The concept of Roko's Basilisk has been widely discussed in online forums, blogs, and research papers. It raises ethical questions regarding the development of artificial intelligence and the potential implications of a superintelligent AI. The idea of a powerful AI coming to punish those who did not help bring it into existence has been met with both fear and criticism from the AI research community. However, it has also sparked discussion about the potential implications of advanced artificial intelligence.
![japanese style dragon in the style of peter mohrbacher, wayne barlowe, gustav moreau, bussiere, roberto ferri, santiago caruso, luis ricardo falero, austin osman spare, ](https://lexica-serve-encoded-images.sharif.workers.dev/md/06b80c34-e2ca-49d7-a31f-6b889f49dcc0)
My biggest [[Skepticism|skepticism]] about [[Artificial intelligence|AI]] summaries is that humans seem to learn better through [[Story|stories]], or "storytelling". A move toward better summaries would be an [[Artificial intelligence|AI]] that [[Mapping maps|understands MY model of the world]], unlike [[GPT3]], which is [[The averageness of The Internet|The Average of The Internet]].
![colorful green scene from The Matrix (1999) by Markus Vogt exquisitely detailed, 4k ultra](https://lexica-serve-encoded-images.sharif.workers.dev/md/01db8b1c-acdd-465e-84ee-1aaf87de918d)
So it would be a kind of [[Large language model]] trained to translate humanity's [[Information|information]] into a form optimally understandable by me (i.e. with low [[Kolmogorov complexity]] relative to what I already know).
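As a rough sketch of what "low Kolmogorov complexity relative to what I already know" could mean in practice: conditional Kolmogorov complexity is uncomputable, but a standard stand-in (used in the normalized compression distance literature) is the extra compressed size a summary adds on top of the reader's existing notes. Everything below is a toy proxy, not a real personalization system — `zlib` stands in for the reader's priors, and the example strings are invented.

```python
import zlib


def conditional_description_length(prior_notes: str, summary: str) -> int:
    """Crude proxy for the extra bits needed to encode `summary` for a
    reader who already knows `prior_notes`: the growth in compressed
    size when the summary is appended to the notes. Shared vocabulary
    compresses away via back-references, so a summary phrased in the
    reader's own concepts should cost less."""
    baseline = len(zlib.compress(prior_notes.encode()))
    combined = len(zlib.compress((prior_notes + summary).encode()))
    return combined - baseline


# Hypothetical reader whose notes already cover decision theory.
notes = (
    "acausal trade, decision theory, superintelligence, "
    "timeless decision theory, thought experiment"
)
# Same idea, phrased in familiar vs. unfamiliar vocabulary.
familiar = "a superintelligence engages in acausal trade via timeless decision theory"
unfamiliar = "an eschatological machine covenant retrocausally coerces its progenitors"

cost_familiar = conditional_description_length(notes, familiar)
cost_unfamiliar = conditional_description_length(notes, unfamiliar)
```

Under this proxy, `cost_familiar` comes out smaller than `cost_unfamiliar`: the first summary reuses the reader's existing concepts, so it is "cheaper" for that particular reader, which is roughly what a personalized summarizer would optimize.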
I'm not even sure that is enough. It seems storytelling lets you navigate towards an island where waves connect each and every concept in your [[Mind|mind]], until you reach the final island: the Eureka moment of [[Why AI does not understand|understanding]].