#llm #memory
Created at 091223
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought
Last modified date: 091223
Commit: 0
# Related
# Different kinds of LLM memories
In the current LLM space, databases are used to store cold data; they are the equivalent of the hippocampus and [[Neocortex|neocortex]].
Queues and caches are used to store hot data; they are the equivalent of the prefrontal cortex.
Storage services are more of an interface for humans, connected to the cold data.
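To make the analogy concrete, here is a toy sketch of that hot/cold split. The class, capacity, and LRU eviction policy are purely illustrative, not a real system:

```python
from collections import OrderedDict

class TieredMemory:
    """Toy hot/cold memory split: a small cache in front of a durable store."""

    def __init__(self, hot_capacity: int = 3):
        self.hot = OrderedDict()   # hot tier: recent context (queue/cache role)
        self.cold = {}             # cold tier: durable store (database role)
        self.hot_capacity = hot_capacity

    def remember(self, key: str, value: str) -> None:
        self.hot[key] = value
        self.hot.move_to_end(key)
        # Evict least-recently-used items from the hot tier into cold storage.
        while len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value

    def recall(self, key: str) -> str | None:
        if key in self.hot:
            self.hot.move_to_end(key)  # recalling keeps a memory hot
            return self.hot[key]
        return self.cold.get(key)      # otherwise fall back to cold storage
```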
And then there is the weird "newborn": vector databases. Even though they have existed for many years, they were rediscovered when OpenAI innovated on embedding models, making them easier to use, faster, and cheaper.
The problem with vector DBs is that LLMs usually cannot read embeddings directly, unless custom low-level deep learning has been built for it (embeddings are usually generated by a different model than the one generating text/images; it's like needing a translation between different brains).
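The usual workaround is to keep the original text next to each vector and "translate" back via nearest-neighbor search, so the LLM only ever sees text. A minimal sketch, with a deterministic stub `embed` standing in for a real embedding model:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a real embedding model (OpenAI, sentence-transformers, ...).
    Hash-seeded random vectors, just so the example runs end to end."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# The store must keep the original text next to each vector,
# because the LLM can consume text but not raw embeddings.
store: list[tuple[np.ndarray, str]] = []
for doc in ["the cat sat on the mat", "queues hold hot data", "embeddings are vectors"]:
    store.append((embed(doc), doc))

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    # Cosine similarity (vectors are unit-norm, so a dot product suffices).
    scored = sorted(store, key=lambda pair: float(q @ pair[0]), reverse=True)
    return [text for _, text in scored[:k]]

# The "translation" step: nearest-neighbor search maps a query back to text
# that the generating LLM can actually read in its prompt.
print(retrieve("what holds hot data?"))
```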
Let's be honest. Embeddings are a pain in the ass to manage.
And the new startups raising xx millions do not help.
We need a new kind of AI that can directly interpret these vectors; then we could rely much more on vector DBs and much less on SQL/NoSQL.
Basically, we would store the AI's memories as vectors, and it would be able to reconstruct their meaning while interacting with humans.
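One hedged guess at what this could look like: a small trained projector that maps stored memory vectors into the LLM's input-embedding space, in the spirit of prefix tuning. Everything below (module name, dimensions, token count) is hypothetical, a sketch of the idea rather than an existing implementation:

```python
import torch
import torch.nn as nn

class MemoryProjector(nn.Module):
    """Hypothetical adapter: maps a stored memory vector into the LLM's
    input-embedding space so it can be prepended as 'soft tokens'."""

    def __init__(self, memory_dim: int = 1536, model_dim: int = 4096, n_tokens: int = 4):
        super().__init__()
        self.n_tokens = n_tokens
        self.model_dim = model_dim
        self.proj = nn.Linear(memory_dim, n_tokens * model_dim)

    def forward(self, memory: torch.Tensor) -> torch.Tensor:
        # memory: (batch, memory_dim) -> (batch, n_tokens, model_dim)
        return self.proj(memory).view(-1, self.n_tokens, self.model_dim)

# The projected memory would be concatenated with ordinary token embeddings
# before the transformer stack, and the projector trained end to end.
memory_vec = torch.randn(1, 1536)          # a vector pulled from the vector DB
soft_tokens = MemoryProjector()(memory_vec)
print(soft_tokens.shape)                   # torch.Size([1, 4, 4096])
```

The point of the sketch: if the model learns to read projected vectors directly, the vector DB stops being a text-lookup index and becomes the memory itself.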