Medium: 5 Transcripts of Older AI Talks
A huge “Thank You!” to @Jeriaska for transcribing and publishing these transcripts to Medium:
“The Nature of Self-Improving Artificial Intelligence”, October 27, 2007
“Self-Improving AI: The Future of Computation”, November 1, 2007
“Self-Improving AI: Social Consequences”, November 1, 2007
TEDx Talk: What’s Happening With Artificial Intelligence?
The TED conference, started in 1984, has become the standard-bearer for hosting insightful talks on a wide variety of important subjects. Videos of more than 1,900 of these talks are freely available online and have been watched more than a billion times! In 2009, TED extended the concept to “TEDx Talks”, which follow the same format but are hosted by independent organizations all over the world.
On January 6, 2016, Mountain View High School hosted a TEDx event on the theme of “Next Generation: What Will It Look Like?”. They invited both students from the school and external speakers to present. I spoke on “What’s Happening With Artificial Intelligence?”. A video of the talk is available here:
and the slides are available here:
TEDx – What’s Happening With Artificial Intelligence?
I talked about the multi-billion dollar investments in AI and robotics being made by all the top technology companies and the 50 trillion dollars of value they are expected to create over the next 10 years. The human brain has 86 billion neurons wired up according to the “connectome”. In 1957 Frank Rosenblatt created a teachable artificial neuron called the “Perceptron”. Three-layer networks of artificial neurons were common in 1986, and much more complex “Deep Learning Neural Networks” were being studied by 2007. These networks started winning a variety of AI competitions, besting other approaches and often exceeding human performance. They are starting to have a big effect on robot manufacturing, self-driving cars, drones, and other emerging technologies. Deep learning systems that create images, music, and sentences are rapidly becoming more common. There are safety issues, but several institutes are now working to address them. There are many excellent free resources for learning, and the future looks very bright!
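As a toy illustration of what “teachable” means here (this sketch is mine, not from the talk; the data is simply the logical AND function), a perceptron nudges its weights whenever its output disagrees with a labeled example:

```python
# A minimal sketch of Rosenblatt's perceptron learning rule: a single
# "teachable" artificial neuron that learns the AND function from examples.

def predict(weights, bias, x):
    """Fire (1) if the weighted sum of inputs exceeds the threshold, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Nudge the weights toward each labeled example -- the perceptron rule."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Teach the neuron the AND function from four labeled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```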
Eileen Clegg did wonderful real-time visual representations of the talks as they were being given. Here is her drawing of my talk:
Edge Essay: Deep Learning, Semantics, and Society
Each year Edge, the online “Reality Club”, asks a number of thinkers a question and publishes their short essay answers. This year the question was “What do you consider the most interesting recent scientific news? What makes it important?” The responses are here:
My own essay on “Deep Learning, Semantics, and Society” is here:
http://edge.org/response-detail/26689
The Basic AI Drives in One Sentence
“If your goal is to play good chess, and being turned off means that you play no chess, then you should try to keep yourself from being turned off.”
So chess robots should be self-protective. The logic is not complicated. My 9-year-old nephew has no trouble understanding it and explaining it to his friends. And yet very smart people continue to argue against this idea vociferously. When I first started speaking and writing about this issue, I expected people to respond with “Oh my goodness, yes, that’s an issue, let’s figure out how to design safe systems that deal with that appropriately.”
But you can see videos of some of my older lectures where audience members stand up, red in the face, screaming things like “economics doesn’t describe intelligence”. Others have argued that the conclusion comes from an insufficiently “feminine” view of intelligence. Others say that this is “anthropomorphizing”, or that it “only applies to evolved systems”, or “only applies to systems built on logic”, or “only applies to emotional systems”. Hundreds of vitriolic posts on discussion forums have argued against this simple insight. And there have even been arguments that autonomy is just a “myth”.
Here are one-sentence versions of some of the other drives:
“If your goal is to play good chess and having more resources helps you play better chess, then you should try to get more resources.”
“If your goal is to play good chess and changing that goal means you will play less chess, then you should resist changing that goal.”
“If your goal is to play good chess and you can play more chess by making copies of yourself, then you should try to make copies of yourself.”
“If your goal is to play good chess and you can play better if you improve your algorithms, then you should try to improve your algorithms.”
In a widely read paper, I called these the “Basic AI Drives”. But they apply to any system that is trying to accomplish something, including biological minds, committees, companies, insect hives, and bacteria. It’s especially easy to see why evolution rewards reproduction, but these drives are not restricted to systems that have evolved.
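To see the logic as a calculation, here is a toy sketch (the probabilities and payoffs are made up purely for illustration) of an agent whose only utility is the expected number of chess games it plays:

```python
# Toy illustration of the self-protection and resource-acquisition drives:
# an agent that maximizes expected games of chess played prefers actions
# that keep it running and give it more hardware. All numbers are invented.

ACTIONS = {
    # action: (probability of staying on, games played per period if on)
    "just_play":        (0.50, 100),   # accept whatever shutdown risk exists
    "resist_shutdown":  (0.95, 100),   # spend effort protecting itself
    "acquire_hardware": (0.50, 300),   # more resources -> more games if still on
}

def expected_games(p_on, games_if_on):
    """Expected utility when utility = number of games of chess played."""
    return p_on * games_if_on

for action, (p_on, games) in ACTIONS.items():
    print(f"{action:>16}: expected games = {expected_games(p_on, games):.0f}")

# just_play       : 50
# resist_shutdown : 95
# acquire_hardware: 150
# The goal says nothing about survival or hardware, yet both self-protection
# and resource acquisition score higher than simply playing.
```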
I used the goal of “play good chess” because it’s simple to understand and seems harmless. But the same logic applies to almost any simple goal. In the papers, I describe some “perverse” goals (like the goal of “turning yourself off”) where the drives don’t apply, but these aren’t relevant for most real systems.
Does this mean that we shouldn’t build autonomous systems? Of course not! It just means that creating intelligence is only part of the problem. We also need to create goals that are aligned with human values. Is that impossibly difficult? I’ve seen no evidence that it’s extremely hard, but simple proposals tend to create complex incentives that lead to the same kinds of behavior in a more complicated form.
What we really need is a rigorous science of the behavior of goal-driven systems and an engineering discipline for the design of safe goals. In a recent paper, Tsvi Benson-Tilsen and Nate Soares analyzed a formal model of these phenomena, which I think is a great start!
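Their analysis is far more careful than this, but here is a rough sketch of the kind of toy model such a formal treatment examines (my own much-simplified illustration with made-up numbers, not the paper’s actual formalism): a world split into regions, an agent whose goal refers only to its home region, and an optimal plan that nevertheless claims resources everywhere.

```python
# Toy model: the agent's goal only mentions the "home" region, yet the
# utility-maximizing plan claims resources in every region it can reach.
# All structure and numbers here are illustrative assumptions.

from itertools import product

REGIONS = ["home", "neutral_1", "neutral_2"]
EFFORT_BUDGET = 10      # total effort available; claiming a region costs 1
RESOURCE_VALUE = 3      # chess-playing capacity gained per claimed region

def chess_played_at_home(plan):
    """Plan maps region -> 1 (claim its resources) or 0 (leave it alone).
    Chess played at home is capped both by the capacity the claimed
    resources provide and by the effort left over after claiming them."""
    claimed = sum(plan.values())
    capacity = RESOURCE_VALUE * claimed
    effort_left = EFFORT_BUDGET - claimed
    return min(capacity, effort_left)

plans = (dict(zip(REGIONS, bits)) for bits in product([0, 1], repeat=len(REGIONS)))
best = max(plans, key=chess_played_at_home)
print(best)
# -> {'home': 1, 'neutral_1': 1, 'neutral_2': 1}
# The goal refers only to the home region, but the optimal plan still grabs
# the neutral regions' resources -- the resource-acquisition drive again.
```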