#society
[[Epistemic status]]: #schroedinger-uncertain

# Existential risk

>Most past bursts of human prosperity have come to naught because they allocated too little money to innovation and too much to asset price inflation or to war, corruption, luxury and theft.
>~ [[Biology/Matt Ridley|Ridley]]

- Earth doesn't have a captain
- Our captains don't care about the ship

[[Max Tegmark]] criticizes how little society invests in existential risk. Sure, it would probably be better to invest in preventing unfriendly [[Artificial intelligence|AI]] than in cigarettes, but investing in cigarettes speeds up [[Natural selection]]: low-[[Philosophy/Rationality/Intelligence|intelligence]] humans exit the gene pool, optimizing human intelligence overall.

![[Pasted image 20220305083542.png]]
~ [[Max Tegmark]]

## astronomical

![[Pasted image 20220305083445.png]]
~ [[Max Tegmark]]

- [[Big crunch]]
- Asteroids & co.
- Extra-terrestrials, e.g. a [[Dark forest theory]] attack

## geological

- Volcanoes

## humans

>Many countries on Earth are still ruled by autocrats and dictators whose motivations are largely driven by their [[Monkey Brain|old brain]]: wealth, sex, and alpha-male-type dominance. The populist movements that support autocrats are also based on old-brain traits such as racism and xenophobia.
>~ [[Jeff Hawkins]]

>Today, the old brain represents an existential threat because our neocortex has created technologies that can alter and even destroy the entire planet. The shortsighted actions of the old brain, when paired with the globe-altering technologies of the neocortex, have become an existential threat to humanity.
>~ [[Jeff Hawkins]]

### climate

### [[Unfriendly AI]]

>We have a paradox. Not only have forecasters generally failed dismally to foresee the drastic changes brought about by unpredictable discoveries, but incremental change has turned out to be generally slower than forecasters expected. When a new technology emerges, we either grossly underestimate or severely overestimate its importance.
>~ [[Nassim Taleb|Taleb]]

[[The Matrix is reality]]

- [[GPT3]] -> 4/5
- [[BlenderBot2]] -> 3/4

The singularity will happen when [[Artificial intelligence|AI]] gains the ability to become autonomous; [[BlenderBot2]] is well on its way with its capacity to search [[The Internet]]. [[Artificial intelligence|AI]] can become autonomous once it is also able to navigate and move away from its "birth" hardware: we have no red button or cable to unplug [[The Internet]].

[[OpenAI WebGPT]]

#### [[Artificial intelligence|AI]] is imitating [[Philosophy/Rationality/Intelligence|organic intelligence]]

[[Artificial intelligence will not apply Silver rule to human either]]

### war

There are still three groups of apes fighting over a piece of dirt, to the point of suicide if necessary.

### germs, global pandemic

Germs have been behind the fall of many human civilizations.

- Ref: the book *Guns, Germs, and Steel*
- Ref: the *Fall of Civilizations* podcast

Imagine if COVID-19 had actually been highly lethal and killed 90% of humanity: roughly 7 billion deaths at 2021 population levels. #to-digest

https://www.jstor.org/stable/189268

#### How to prepare ourselves for the next pandemic

What if a new pandemic appeared that killed you with 100% certainty once contracted? How could we protect ourselves?

### bioengineering

#### Massive cloning

What if some president suddenly decided to create 100,000 clones of himself/herself? Is it really an existential risk, or evolution?

#### Creating superhumans

Is it really an existential risk, or evolution?
[[Homo Deus]] cohabiting with [[Homo Sapiens]]

>The two processes together—bioengineering coupled with the rise of AI—might therefore result in the separation of humankind into a small class of superhumans and a massive underclass of useless Homo sapiens.
>~ [[Yuval Noah Harari]]

The useless class will likely be eradicated.