AI Alignment - Wikipedia - superintelligent AI

## Metadata
- Author: **superintelligent AI**
- Full Title: AI Alignment - Wikipedia
- Category: #articles
- URL: https://en.wikipedia.org/wiki/AI_alignment
## Highlights
- In artificial intelligence (AI) and philosophy, AI alignment and the AI control problem are aspects of how to build AI systems such that they will aid rather than harm their creators. One particular concern is that humanity will have to solve the control problem before a superintelligent AI system is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering,[2] might also find applications in existing non-superintelligent AI.