James Barrat just wrote a powerful article for the Huffington Post, in which he explicitly supported our work (thanks, James!):
The crux of the problem is that we don’t know how to control superintelligent machines. Many assume they will be harmless or even grateful. But important research conducted by A.I. scientist Steve Omohundro indicates that they will develop basic drives. Whether their job is to mine asteroids, pick stocks or manage our critical infrastructure of energy and water, they’ll become self-protective and seek resources to better achieve their goals. They’ll fight us to survive, and they won’t want to be turned off. Omohundro’s research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully. We are right to ask, as Stephen Hawking did, “So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right?”
Wrong. With few exceptions, they’re developing products, not exploring safety and ethics. In the next decade, artificial intelligence-enhanced products are projected to create trillions of dollars in economic value. Shouldn’t some fraction of that be invested in the ethics of autonomous machines, solving the A.I. control problem and ensuring mankind’s survival?