
August 22, 2015

RoboPsych Interview about the TV Show “Humans”

by omohundro

On August 18, 2015, the psychologist Tom Guarriello interviewed me for his “RoboPsych” podcast:

http://www.robopsych.com/robopsychpodcast/8182015

We talked about the newly emerging psychology of humans interacting with robots and AI. And, *SPOILER WARNING*, we discussed the first season of the excellent recent AMC/Channel 4 show “Humans”.

There has been a flood of recent movies and TV shows exploring the impact of robots and AI. Early films like “The Terminator” and “RoboCop” focused on “Us vs. Them”. More recent works like “Her” and “Humans” explore subtler aspects of the interaction.

The archetype of the “Out-of-Control Creation”. The Sorcerer’s Apprentice. Stories of genies granting three wishes, but with unintended outcomes. King Midas. The “Uh-Oh!” moment: even if you get what you think you want, it may not be what you really want. Adam and Eve as the first out-of-control creation story. We ourselves are out of control. Fear of the other is a projection of our own darker inner drives.

“Humans” takes place in the present day, but with advanced android “Synth” technology. Family dynamics with Synths: the little girl sees the Synth as a mother figure, the mother is jealous of the Synth, and the teenage boy is sexually attracted to the Synth. Synths as a memory prosthetic. Synths with consciousness. Synths with subpersonalities.

How close is today’s technology to anything like this? Economic drivers for building AIs that recognize human emotion in facial and vocal expressions. A recent Microsoft AI judges the humor of New Yorker cartoons. Artificial empathy. Jibo and Pepper. Things are moving extremely rapidly: McKinsey estimates $50 trillion of economic impact over the next 10 years. Deep Learning is being used for many functions; Baidu is using it for Chinese. If understanding human emotions has economic value, it will soon be in the marketplace.

Humans are not good at judging how emotionally intelligent an entity is. ELIZA was an early AI system from the 1960s that used simple pattern matching to mimic a Rogerian therapist, yet people spent hours talking to it! There is a deep tendency for people to form attachments to objects: people name their Roombas, and soldiers grow attached to their IED-detecting robots. Synths can behave more maturely than humans! Non-Violent Communication.
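
To get a sense of how little machinery this takes, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and responses below are my own illustrative inventions, not Weizenbaum’s original script:

```python
import random
import re

# Illustrative ELIZA-style rules: a regex pattern plus response templates.
# "{0}" is filled in with the text captured from the user's statement.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def eliza_reply(statement: str) -> str:
    """Reflect the user's statement back, Rogerian-therapist style."""
    text = statement.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(match.group(1))
    return "Please go on."

print(eliza_reply("I am worried about my Synth"))
# e.g. "How long have you been worried about my synth?"
# (The real ELIZA also swapped pronouns, e.g. "my" -> "your".)
```

A few dozen such rules are enough to sustain a surprisingly engaging conversation, which is exactly why people overestimated ELIZA’s emotional intelligence.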

Synths in the service of marketing for a brand? “Hidden Persuaders” and sexuality in advertising. Brands adopt the “Jester” archetype when they ride on deep primal urges like sexuality. Japan and virtual girlfriends. Japan’s relationship to robots. Robots for elder care. Belief in a robot-euthanasia hoax. The uncanny valley. Elders’ experiences with robotic companions. Robot pets. Tamagotchi. Sony ending support for the Aibo robot dog. Kids no longer learn handwriting. Horse riding becoming less common. Future shock. Visions of the future from the past. Approach/avoidance conflicts.

Creators of these systems want them to have intelligence and creativity, but they also want to retain control of them. Are they alive? What rights do they have? Building in safeguards. How can we have confidence that these systems won’t run amok? In “Humans”, the Synths exhibit ambiguity about their own consciousness. They give the code for consciousness to a human for safekeeping, but Niska secretly keeps her own copy and may want to spread it in Season Two! The Synths’ experiences affect their behavior.

What happens when a system can change its own structure? What is the nature of goals and behavior? Unintended consequences. The basic rational “AI drives”: self-preservation, resource acquisition, replication, and efficiency. We need to be careful as we build systems with their own intentions. DeepMind’s system that adapts to play video games. When will systems start exhibiting unexpected behavior? Robot “fail” videos. “Whistling past the graveyard?” When we see goofy behavior, it assuages our fear: “Nothing to see here. Move along.”
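
DeepMind’s Atari player paired this kind of trial-and-error learning with a deep neural network. The sketch below is only a minimal tabular Q-learning example in Python, with a made-up five-state “corridor” environment and illustrative parameters, showing how behavior adapts purely from a reward signal:

```python
import random
from collections import defaultdict

# Tiny tabular Q-learning demo on a 5-state corridor with a reward at the far end.
# DeepMind's Atari system used a deep network in place of this table, but the
# update rule that drives the adaptation is the same basic idea.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate
ACTIONS = (-1, +1)                       # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = defaultdict(float)                   # Q[(state, action)] -> value estimate

for _ in range(1000):                    # episodes
    state = 0
    for _ in range(100):                 # cap episode length
        if random.random() < EPSILON:    # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                            # otherwise act greedily (random tie-break)
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
# Learned greedy policy: move right (+1) from every non-terminal state.
```

Nothing in the reward signal says “move right”; the policy emerges from the learning process, which is also why larger systems can surprise their designers.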

Robot soldiers, the South Korean autonomous gun turret, drones, etc. “How can we be very sure that these systems are safe?” A conservative approach: the “Safe-AI Scaffolding Strategy”. Regardless of how smart they are, these systems have to obey the laws of mathematics and physics, so we can create mathematical proofs of properties of their behavior. But proofs are hard; we will need AI systems to help us establish safety guarantees. Start with very constrained systems, as we do with biohazard labs. Err on the side of caution, because we are toying with very powerful forces here.
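
As a toy illustration of proving properties of behavior for a very constrained system, here is a brute-force check in Python. Everything here is my own invented example: a “scaffold” that refuses resource requests beyond a hard cap, verified by exhaustively searching all short action sequences, which is a crude stand-in for the kind of mathematical proof the strategy actually calls for:

```python
from itertools import product

CAP = 10                       # hard resource limit the scaffold must enforce
ACTIONS = (0, 1, 2, 3)         # resource units an action may request

def scaffolded_step(used, request):
    """Grant a request only if total usage stays within the cap."""
    return used + request if used + request <= CAP else used

def safety_holds(horizon=6):
    """Check that every action sequence up to `horizon` steps stays within CAP."""
    for plan in product(ACTIONS, repeat=horizon):
        used = 0
        for request in plan:
            used = scaffolded_step(used, request)
            if used > CAP:
                return False   # counterexample: the guard can be violated
    return True                # no violating sequence found at this horizon

print(safety_holds())          # True
```

A real proof would have to cover unbounded horizons and a far richer action space, which is exactly why AI assistance with formal verification matters.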

Psychoanalytic aspects of the Beatrice Synth. Suicidal Synths? Humanity vs. being human. The first season ends with an anti-Synth “We are human” protest, and the conscious Synths escape by blending in with the humans.
