#llm #ai
Created at 110423
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought
Last modified date: 110423
Commit: 0
# Related
- [[Computing/Analogy LLM context and human thinking]]
- [[Computing/Intelligence/LLMs are a lever rather than an engine]]
- [[Computing/Automatic RLHF]]
- [[Computing/The advantage of LLMs to understand human memetics]]
- [[Computing/Intelligence/Evaluating LLM]]
# Context locality in LLM
What if an [[Large language model|LLM]] could take an input of infinite size? Is that even possible?
How could it predict or solve anything given infinite input? You at least need a goal, right?
Aren't humans precisely the ones throwing useless information away, [[Philosophy/Rationality/Models/Noise|noise]], keeping only the [[Signal]]s, i.e. whatever is aligned with their [[Personal growth/Goal|goal]]s?
An [[Large language model|LLM]] will always need to be fed the right window of context, and that is why [[Embedding is the dark matter of intelligence]].
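A minimal sketch of that idea: given a goal, score stored chunks of "memory" by embedding similarity and keep only the top few as the context window. The `toy_embed` and `select_context` helpers here are hypothetical stand-ins, not any particular library; a real system would swap `toy_embed` for a dense embedding model.

```python
# Sketch: keep only the signal aligned with the goal, drop the noise,
# instead of feeding the model an unbounded input.
from collections import Counter
import math

def toy_embed(text: str) -> Counter:
    # Placeholder embedding: bag-of-words counts. Real systems use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_context(chunks: list[str], goal: str, k: int = 2) -> list[str]:
    # Keep the k chunks most similar to the goal: the "right window of context".
    goal_vec = toy_embed(goal)
    ranked = sorted(chunks, key=lambda c: cosine(toy_embed(c), goal_vec), reverse=True)
    return ranked[:k]

memory = [
    "Notes about quantum nonlocality and entanglement.",
    "Grocery list: milk, eggs, bread.",
    "How embeddings retrieve the right context window for an LLM.",
]
print(select_context(memory, goal="feed the LLM the right context window"))
```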
## The non-local nature of [[Quantum physic|quantum physics]]
[[Nicolas Gisin - Quantum Chance - Nonlocality - Teleportation and Other Quantum Marvels|Quantum Chance - Nonlocality - Teleportation and Other Quantum Marvels]]