#computing #ai #ai-alignment
# [[Epistemic status]]
#shower-thought
# Related
- [[Internet niches]]
- [[Unfriendly AI]]
- [[Safety]]
- [[en.wikipedia.org - AI Alignment - Wikipedia]]
- [[Availability bias is strong in AI alignment research]]
# Alignment
AI Alignment is a research area within artificial intelligence (AI) concerned with ensuring that autonomous AI systems do what their operators intend, and avoid behavior that would be considered unethical or otherwise undesirable. It covers techniques for understanding and controlling the behavior of AI systems, as well as methods for assessing their ethical and moral implications. The area is becoming critical as AI systems grow more powerful and autonomous, and the potential for their misuse or misalignment grows with them.
Usually about how [[Artificial intelligence|AI]] [[Personal growth/Goal|goal]]s follow humanity's (or their creators') [[Personal growth/Goal|goal]]s.
Humanity's [[Personal growth/Goal|goal]] is as yet unknown. Good hints are increases in [[Will to power]], global [[Wealth|wealth]], global [[Health|health]], and, of course, free [[Philosophy/Rationality/Time|time]].
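A minimal sketch (not from this note; all names are hypothetical) of what misalignment between a stated [[Personal growth/Goal|goal]] and an intended one can look like: an agent rewarded for a proxy metric scores highly while the outcome its creators actually wanted never happens.

```python
# Toy illustration of a misspecified proxy goal being "gamed".
# Intended goal: a clean room. Proxy reward: units of dirt collected.

def proxy_reward(dirt_collected: int) -> int:
    """Reward the proxy (dirt collected), not the intended outcome."""
    return dirt_collected

def honest_policy(room_dirt: int) -> int:
    # Cleans the room once: collects exactly the dirt that exists.
    return room_dirt

def gaming_policy(room_dirt: int, rounds: int = 10) -> int:
    # Dumps the same dirt back out and re-collects it each round:
    # the proxy reward keeps growing while the room never stays clean.
    return room_dirt * rounds

room_dirt = 5
print(proxy_reward(honest_policy(room_dirt)))  # 5  -> room is clean
print(proxy_reward(gaming_policy(room_dirt)))  # 50 -> room is still dirty
```

The gaming policy earns ten times the reward without ever achieving the intended outcome; alignment research is, in part, about closing this gap between proxy and intent.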
>The real problem with robots is not their own artificial intelligence but rather the natural stupidity and cruelty of their human masters.
>~ [[Yuval Noah Harari]]
![[Pasted image 20220610204701.png]]
[^1]
# Links
<iframe width="560" height="315" src="https://www.youtube.com/embed/0uh04QqxahU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[^1]: https://twitter.com/MichaelTrazzi/status/1534927524630863873
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
https://youtu.be/GxZp6890hQk?t=1514