#ai #epistemology #ai-alignment

# [[Epistemic status]] #shower-thought

# Related

- [[Unfriendly AI]]
- [[Pessimistic vs optimistic AI]]
- [[Science]]
- [[Humanity's knowledge is obsolete]]
- [[GPT3]]

# Eliciting Latent Knowledge #to-digest

> The core difficulty we discuss is learning how to map between an AI’s model of the world and a human’s model.

[[Artificial intelligence|AI]] is still largely modelled on human [[Philosophy/Rationality/Intelligence|organic intelligence]], but I can see it diverging once it is trained on narrow mechanical tasks that humans do not handle intuitively. A toy sketch of what "mapping between models" could look like is at the end of this note.

# External links

- https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
- https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit

Similar topic links:
[[Eliciting Latent Knowledge]] [[Unfriendly AI]] [[Science]] [[Pessimistic vs optimistic AI]] [[Current bottlenecks of artificial intelligence]]
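
To digest later: a minimal toy sketch, in PyTorch, of the "reporter" idea from the ARC report, i.e. a small head trained to translate a predictor's latent state into answers to human questions. All names, dimensions, and data here are hypothetical stand-ins, not ARC's actual setup; the point is only the shape of the mapping problem.

```python
# Toy sketch (assumed shapes, random stand-in data): learn a map from an AI
# predictor's latent state to human-interpretable yes/no answers.
import torch
import torch.nn as nn

LATENT_DIM, QUESTION_DIM = 64, 16  # hypothetical toy dimensions


class Reporter(nn.Module):
    """Maps (predictor latent, encoded question) -> probability the answer is 'yes'."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + QUESTION_DIM, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, latent, question):
        return torch.sigmoid(self.net(torch.cat([latent, question], dim=-1)))


# Training step sketch: latents would come from a frozen predictor, labels from
# humans on cases they can check. ELK's open problem is whether this generalises
# to cases humans cannot check (the "human simulator" failure mode).
reporter = Reporter()
opt = torch.optim.Adam(reporter.parameters(), lr=1e-3)

latent = torch.randn(8, LATENT_DIM)      # stand-in for predictor activations
question = torch.randn(8, QUESTION_DIM)  # stand-in for encoded questions
label = torch.randint(0, 2, (8, 1)).float()

loss = nn.functional.binary_cross_entropy(reporter(latent, question), label)
loss.backward()
opt.step()
```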