#ai #llm #analogy #mental-model
Created at 210323
# [Anonymous feedback](https://www.admonymous.co/louis030195)
# [[Epistemic status]]
#shower-thought
Last modified date: 210323
Commit: 0
# Related
- [[Computing/Intelligence/Large language models are not personal enough - how to fix it]]
- [[Computing/AI thoughts 100323]]
- [[Computing/Intelligence/Alignment/Laws]]
- [[Philosophy/Mind/Transhumanism/Thought experiment - Speaking with a human vs Speaking with ChatGPT]]
- [[Full presentation]]
- [[Business/Google Search chances of survival vs large language models]]
# TODO
> [!TODO] TODO
# Existential risk and ChatGPT apprehension of Asimov laws
Regarding existential risk, something I found interesting with ChatGPT is that it seems, in a way, to have internalized [Asimov's laws](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics).
> First Law
> A robot may not injure a human being or, through inaction, allow a human being to come to harm.
> Second Law
> A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
> Third Law
> A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you have ever tried ChatGPT, you may have noticed that it responds in a very politically correct manner and avoids being “mean” in general.
Correct me if I’m wrong, but the way I would explain ChatGPT’s training is that it first learned general human knowledge on its own ([self-supervised learning](https://en.wikipedia.org/wiki/Self-supervised_learning)) and was then “taught” by humans to respond in line with human wishes.
In effect, OpenAI paid humans to rate whether outputs from earlier models were correct or not, for example:
Input: “The white horse of Henri IV is of color:” ; Output: “Red”. Here the human labeler would note that the model should have output “white”.
This means we have found a way to forge rules into the AI, statistically. I think this is a very positive thing in terms of existential risk.
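To make the two stages concrete, here is a minimal, hypothetical sketch in PyTorch. The toy models, sizes, and random tensors are placeholders I made up; only the shape of the two objectives (next-token prediction, then a pairwise preference loss for a reward model, as commonly used in RLHF) reflects the general recipe, not OpenAI’s actual code.

```python
import torch
import torch.nn.functional as F

vocab_size, dim, seq_len = 100, 32, 16

# Stage 1: self-supervised pretraining -- learn to predict the next token.
lm = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, dim),
    torch.nn.Linear(dim, vocab_size),
)
tokens = torch.randint(0, vocab_size, (8, seq_len))      # toy batch of token ids
logits = lm(tokens[:, :-1])                              # predict token t+1 from token t
pretrain_loss = F.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)

# Stage 2: human feedback -- a labeler marks which of two answers is better
# (e.g. "white" preferred over "red" for the Henri IV question above),
# and a reward model is trained so the preferred answer scores higher.
reward_model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, dim),
    torch.nn.Flatten(),
    torch.nn.Linear(dim * seq_len, 1),
)
preferred = torch.randint(0, vocab_size, (8, seq_len))   # answers labelers liked
rejected = torch.randint(0, vocab_size, (8, seq_len))    # answers labelers rejected
# Pairwise preference loss: -log sigmoid(r_preferred - r_rejected)
reward_loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()

print(pretrain_loss.item(), reward_loss.item())
```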
An analogy between ChatGPT and humans: humans are born with natural, general intelligence (the equivalent of [self-supervised learning](https://en.wikipedia.org/wiki/Self-supervised_learning)), and are then nurtured by school and their environment (the equivalent of [reinforcement learning from human feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback), RLHF).
---
On another note, I think LLMs (large language models) leverage intelligence: right question, right answers.
I’m not a big fan of internet memes, but I found the IQ Bell Curve interesting to use as a mental model for this:
*(image: IQ bell curve meme)*
LLMs are trained on a large amount of data, from which it follows that these AIs sit, by default, in the middle of the IQ curve.
Outputs can move lower or higher on the curve depending on your prompt; in other words, **LLMs are a lever on intelligence**.
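To illustrate the lever, a hedged sketch using the OpenAI Python client: the same question asked naively vs. with an expert framing. The model name, prompts, and question are illustrative assumptions of mine, not from this note.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "How should I structure a backup strategy for a small server?"

# A low-effort prompt tends to pull an average, middle-of-the-curve answer.
naive = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)

# Framing the model as an expert and constraining the answer is the lever:
# it moves the output further up the curve.
levered = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a senior site-reliability engineer. "
                    "Answer with concrete tools, trade-offs, and a checklist."},
        {"role": "user", "content": question},
    ],
)

print(naive.choices[0].message.content)
print(levered.choices[0].message.content)
```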