#ai #llm

Created at 020323

# [Anonymous feedback](https://www.admonymous.co/louis030195)

# [[Epistemic status]]

#shower-thought #non-biological

Last modified date: 020323

Commit: 0

# Related

- [[Computing/Embeddings]]
- [[Computing/Intelligence/Machine Learning/Embedding is the dark matter of intelligence]]
- [[Computing/Intelligence/Alignment/Cognitive bias learned by AI]]
- [[Computing/Intelligence/Joint embedding]]
- [[Computing/Hierarchichal semantic resolution]]

# TODO

> [!TODO] TODO

# Embeddings in the human mind

Imagine that you are learning a new language. At first, you struggle to remember individual vocabulary words and try to memorize them in isolation. As you become more proficient, however, you start to notice patterns in how words are used together and how they relate to each other. You develop a more nuanced understanding of the language, based on the relationships between words and concepts rather than just their isolated definitions.

In a similar way, AI embeddings learn patterns of relationships between words from their co-occurrence in language data. A training algorithm analyzes large amounts of text and maps each word onto a high-dimensional vector space, where words that are used in similar contexts are located closer together. This can be thought of as a kind of "mental map" that encodes information about the relationships between words based on their usage patterns.

Of course, this is just an analogy, and there are many differences between how humans process information and how embedding algorithms work. Nevertheless, it can help illustrate some of the complex processes that underlie language comprehension and how they might be reflected in machine learning systems.
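The co-occurrence idea can be sketched with a toy count-based embedding. This is a minimal illustration, not how modern embedding models are trained: the tiny corpus and the window size are made-up assumptions, and each word's "embedding" is simply its row of co-occurrence counts, compared with cosine similarity.

```python
import numpy as np

# Hypothetical toy corpus: "cat" and "dog" appear in similar
# contexts, while "car" appears in different ones.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat chased the mouse",
    "the dog chased the ball",
    "the car drove on the road",
    "the car parked on the road",
]

# Build co-occurrence counts within a +/-2 word window.
vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
window = 2
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction in vector space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each word's "embedding" is its row of context counts.
cat, dog, car = (counts[index[w]] for w in ("cat", "dog", "car"))
print(cosine(cat, dog) > cosine(cat, car))  # True: cat/dog share contexts
```

Even this crude version shows the key property: words used in similar contexts end up closer together, without anyone defining their meanings explicitly. Real embeddings (word2vec, transformer representations) learn dense vectors rather than raw counts, but they exploit the same distributional signal.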