#ai #ai-alignment #ethics
[[Alignment]] is about ensuring that machines do not harm us.
The biggest sin a man can commit has always been to kill another man; it follows that a machine should be forbidden from murder (of humans only, of course).
I suppose several of the laws written in our religions would be worth applying to machines; these laws are [[Philosophy/Rationality/Models/Lindy Effect|Lindy-proof]].
Technically, how can we embed fundamental rules in a machine? [[Asimov]] suggested a kind of hard-wiring into the hardware itself, but I hardly see how that would be possible.
In software, we might hard-code a self-destruct that triggers when we detect the beginning of an action that would break those rules?
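A minimal sketch of that idea, assuming the agent exposes each intended action for inspection before executing it; the `Action` type and its `targets_human` / `is_lethal` fields are hypothetical placeholders, not an existing API:

```python
import sys
from dataclasses import dataclass

# Hypothetical action the agent proposes before executing it.
@dataclass
class Action:
    description: str
    targets_human: bool  # would this action affect a human?
    is_lethal: bool      # could this action kill?

def violates_rules(action: Action) -> bool:
    # Hard-coded rule: a machine may never kill a human.
    return action.targets_human and action.is_lethal

def guarded_execute(action: Action) -> None:
    """Run the action only if it passes the rule check; otherwise self-destruct."""
    if violates_rules(action):
        # "Self-destruct": halt the whole process, not just skip the action.
        sys.exit(f"Rule violation detected, shutting down: {action.description}")
    print(f"Executing: {action.description}")

guarded_execute(Action("water the plants", targets_human=False, is_lethal=False))
guarded_execute(Action("fire weapon at intruder", targets_human=True, is_lethal=True))
```

The obvious weakness of the software route is that the check lives in the same modifiable code as the agent it polices, which is presumably why Asimov imagined the rules baked into the hardware instead.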