#computing
- Extending [[Francois Chollet]]'s ideas universally: we are a very specialized [[Philosophy/Rationality/Intelligence]]/[[Computing/Intelligence]], highly fit to our survival in this environment, Earth. We therefore tend to mismeasure [[Philosophy/Rationality/Intelligence]]/[[Computing/Intelligence]], and very little work is done there -> implement a sort of Turing test but for AGI, e.g. extending "On the Measure of Intelligence". Humans try to measure [[Philosophy/Rationality/Intelligence]]/[[Computing/Intelligence]] with "IQ tests", but isn't that a very narrow [[Philosophy/Rationality/Intelligence]]/[[Computing/Intelligence]]?
- (Kinda a follow-up of the previous point) AGI "monitoring": how can we observe and monitor an AI policy? Imagine a "Prometheus" but for AI policies? (Does this hardly apply to narrow AI? Or maybe it does, e.g. vision biases...?)
- I think humans are interested in enhancing themselves using AI, therefore the AI must necessarily understand human nurture / culture / behavior / model of the world.
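The first point — measuring skill-acquisition efficiency rather than raw task skill, in the spirit of "On the Measure of Intelligence" — can be sketched as a toy formula. This is a loose illustration, not Chollet's actual formalism: `TaskResult`, `acquisition_efficiency`, and all the numeric quantities below are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Outcome of an agent on one task (all fields are toy quantities)."""
    skill: float       # achieved skill level, in [0, 1]
    priors: float      # amount of built-in knowledge consumed
    experience: float  # amount of task-specific practice consumed

def acquisition_efficiency(results):
    """Toy Chollet-inspired measure: skill gained per unit of
    priors + experience, averaged over a scope of tasks.
    A narrow system buying one high score with heavy priors rates
    lower than a general one learning several tasks cheaply."""
    scores = [r.skill / (r.priors + r.experience) for r in results]
    return sum(scores) / len(scores)

# A "narrow" agent: near-perfect on one task, paid for with heavy priors.
narrow = [TaskResult(skill=0.95, priors=8.0, experience=2.0)]
# A "general" agent: moderate skill across three tasks, cheaply acquired.
general = [TaskResult(skill=0.6, priors=0.5, experience=0.5) for _ in range(3)]

print(acquisition_efficiency(narrow))   # 0.095
print(acquisition_efficiency(general))  # 0.6
```

The point of the toy: an IQ-style snapshot of `skill` alone would rank the narrow agent higher; dividing by priors and experience flips the ranking.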
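The "Prometheus for AI policies" idea could start as ordinary metrics instrumentation around a policy's action loop. A minimal dependency-free sketch, mimicking Prometheus-style counters, cumulative histogram buckets, and the text exposition format a scraper would collect; `PolicyMonitor`, the metric names, and the random stand-in policy are all hypothetical.

```python
import random
from collections import Counter, defaultdict

class PolicyMonitor:
    """Minimal sketch of policy observability: count each action the
    policy takes and bucket its rewards, then expose both in a
    Prometheus-like text format."""

    def __init__(self, buckets=(0.0, 0.5, 1.0)):
        self.action_counts = Counter()
        self.buckets = buckets
        self.reward_buckets = defaultdict(int)

    def record(self, action, reward):
        self.action_counts[action] += 1
        # Cumulative buckets, as in a Prometheus histogram: a reward
        # increments every bucket whose upper bound it fits under.
        for b in self.buckets:
            if reward <= b:
                self.reward_buckets[b] += 1

    def scrape(self):
        lines = [f'policy_actions_total{{action="{a}"}} {n}'
                 for a, n in sorted(self.action_counts.items())]
        lines += [f'policy_reward_bucket{{le="{b}"}} {self.reward_buckets[b]}'
                  for b in self.buckets]
        return "\n".join(lines)

random.seed(0)
mon = PolicyMonitor()
for _ in range(100):
    action = random.choice(["left", "right"])  # stand-in for a real policy
    mon.record(action, reward=random.random())
print(mon.scrape())
```

Time series like these would already surface drift in a policy's action distribution or reward profile; the same instrumentation arguably applies to narrow models too (e.g. tracking per-class error rates to expose vision biases).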