Eliezer S. Yudkowsky wrote about an experiment concerning Artificial Intelligence. In the near future, man will have given birth to machines able to rewrite their own code, to improve themselves and, why not, to dispense with their creators. This idea sounded a little far-fetched to some critical voices, so an experiment was devised: keep the AI sealed in a box from which it could escape by only one means: convincing a human guardian to let it out.
What if, as Yudkowsky states, ‘Humans are not secure’? Could we outplay our best creation to secure our own survival? Would man be humble enough to accept that he had been superseded, to search for primitive ways to find himself again, to cure himself of a disease written into his own genes? How do you capture a force you voluntarily set free? What if mankind’s worst enemy were humans?
In a near future, we will cease to be the dominant race.
In a near future, we will learn to fear what is to come.