#ai
#computing
---
aliases: [Machine Learning]
---
In artificial life, we implement only nurture, and that is why the artificial intelligence we build is far from human [[Philosophy/Rationality/Intelligence]].
Our artificial learning systems train only on narrow, biased, task-specific data; how can we expect them to reach an [[Philosophy/Rationality/Intelligence]] that evolved in a highly distributed, volatile, noisy environment?
Machine learning is mostly connectionist: what is learned is stored in the weights of networks of simple units.
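A minimal sketch of the connectionist idea: a single artificial neuron whose "knowledge" lives entirely in its connection weights, here learning the logical AND function (a hypothetical toy example, not from the note).

```python
import random

random.seed(0)

# Toy dataset: the AND truth table.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

# The "knowledge" is just these weights and a bias, initialized randomly.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.1  # learning rate

for _ in range(100):  # a few passes over the data
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # step activation
        err = t - y
        # Perceptron learning rule: nudge the connection weights
        # toward the target whenever the neuron is wrong.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in inputs]
print(predictions)
```

Nothing symbolic is stored anywhere: after training, the learned behavior exists only as numbers on the connections.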
Implementing in machines what humans know from birth, **nature**, is hard (building a pyramid out of blocks), while implementing what humans learn during life, and find hard, **nurture**, is easy (playing chess ...).
One might expect bottlenecks in this theory, such as driving a car: the task mixes nature (we use most of our senses: vision, audio ...) with nurture (moving the steering wheel ...).
It's easy to implement a [basic self driving car](https://github.com/louis030195/LearnToDrive) but hard to implement one that NEVER kills anyone (100%).