AI and the Future of Human Morality
On March 19, 2008, Steve Omohundro gave a talk at the meeting of the World Transhumanist Association (now Humanity+) on "AI and the Future of Human Morality". Great thanks to Drew Reynolds, who filmed the talk, edited the video, and produced a transcript with the original slides. The video is available here:
http://video.google.com/videoplay?docid=1286459692350772178&hl=en
The edited transcript and slides are below and also at:
http://www.acceleratingfuture.com/people-blog/?p=2102
The following transcript of Steve Omohundro’s presentation for the World Transhumanist Association Meetup has been revised for clarity and approved by the author.
AI and the Future of Human Morality
This talk is about “AI and the Future of Human Morality.” Morality is a topic that humanity has been concerned with for millennia. It is considered a field of philosophy, but it also provides the basis for our political and economic systems. A huge amount has been written about morality but transhumanism, AI and other emerging technologies are likely to up the stakes dramatically. A lot of political discourse in the United States today is concerned with abortion, stem cell research, steroids, euthanasia, organ transplants, etc. Each of those issues will arise in much more complex versions due to advanced new technologies. The fact that we have not yet resolved today’s simple versions means that there will likely be very heated discussions over the next few decades.
Something that worries me is a disturbing and potentially dangerous trend among some futurists. Three weeks ago I was at a conference in Memphis called AGI-08 which was a group of about 130 scientists who are interested in building general-purpose AIs that are not specialized for a particular kind of task. Hugo de Garis was one of the speakers at the conference, and he polled the audience, asking: “If it were determined that the development of an artificial general intelligence would have a high likelihood of causing the extinction of the human race, how many of you feel that we should still proceed full speed ahead?” I looked around, expecting no one to raise their hand, and was shocked that half of the audience raised their hands. This says to me that we need a much greater awareness of morality among AI researchers.
The twentieth century gave us many examples of philosophies which put ideas ahead of people, with horrendous results. For example, Nazism, Maoism, Stalinism and the Rwanda genocide respectively led to the deaths of 11 million, 20 million, 20-60 million, and 1 million people.
Here’s a beautiful visual illusion that is a good metaphor for thinking about morality. About half of the population sees the dancer going clockwise and the other half sees her going counter-clockwise. It is remarkably challenging to switch your perception to the other direction. Many illusions are easy to flip, but this one is particularly hard.
When thinking about morality, there are at least two perspectives one may adopt, and it is sometimes very difficult to flip to the other perspective. We may call these two perspectives the “inside” or “subjective” view and the “outside” or “objective” view. The same two perspectives arise in many other disciplines. For example, in physics the “outside” view of space and time is as a single space-time manifold. There is no sense of “now” and no notion of time “moving”. The whole of time exists all at once in a single construct. The “inside” view is that perceived by an intelligent entity, such as us, living in this structure. We very much have a sense of “now” and a sense of the “flow of time”.
When thinking about morality, the “internal” view comes from the perspective of personal experience. We have a personal sense of what is right and wrong. Our inner sense is shaped by our childhood experience with the mores of the social and religious systems we grew up in.
The “external” view tries to step outside of our individual experience and create an objective model. Philosophers and theologians have identified critical moral distinctions and concepts over thousands of years. Evolutionary psychology is the most recent attempt to create an external perspective that explains our internal experience. Economics and legal theory also try to create formal theoretical bases for moral reasoning.
I believe that we need both views, but because we are human, I think the internal one is the one we should consider primary when we think about positive futures. The external view is very important in understanding how we got those perspectives, but I think it is a potentially dangerous mistake to identify ourselves with the external view.
The basic understanding of morality that most psychologists have today builds on the work of Kohlberg from 1971, where he studied the stages of moral development in children and discovered six basic stages, as well as some evidence for a seventh. The stages also seem to apply to cultures.
The stages start with a very egoic sense of self and work up to a much broader sense of self. His methodology in determining a person’s moral stage would be to tell them a story:
A man’s wife is sick and she needs a special medicine. The pharmacist has developed this medicine and will sell it for $10,000 but the man only has $1,000. He pleads with the pharmacist, but the pharmacist says, “No. I developed it and can charge whatever I want to charge.” So in the middle of the night, the man breaks into the pharmacy and steals the medicine to save his wife.
The question is whether this is a moral action. Kohlberg was not actually concerned with whether people think it is moral or not, but rather with their explanations for whatever stance they took. People in the early stages of development might say that the act was wrong because by breaking in, he could be arrested and go to jail. Going to jail is painful and that is not a good thing. People at the later stages might argue that saving his wife's life trumps all other rules and laws, so he is justified in stealing to save her. A middle stage might argue that obeying the law against breaking into buildings is what every good citizen should do, and if his wife has to pass away because of it, that is the price of being a citizen of a society with the rule of law.
He interviewed people from many different cultures and children at different ages, and there tends to be a general progression through the six stages. The possible seventh stage is a kind of transcendent identification with something larger. Many people today identify not just with themselves, their family, local community, group, race or species, but are starting to identify with other animals and perhaps with all other sentient beings in the universe. Buddhism says, “May all sentient beings be happy.” There is an expansion of the sense of connection and responsibility.
If we look at humanity as a whole, we are a really interesting mix of incredible altruism and horrendous evil behavior. We can exhibit much more altruism than any other species, especially when you consider altruism toward other species, and that has been a major component of our success. It is the fact that we are able to cooperate together that has enabled us to build the technologies that we have. At the same time, we have committed more horrendous genocide and caused more extinctions than any other species.
If you look at recent history, however, there is a trend toward great moral progress. 200 years ago, slavery was generally accepted. Now, it is viewed as immoral almost everywhere in the world, at least officially, and pressure is put on societies that still allow it. The same is true of torture, though there has been a lot of recent controversy about it. We have the Geneva Convention and the notion of war crimes, the sense that war is bad but there are things within war that are especially bad. We have the establishment of women’s rights in many countries, though some are still lagging. The same is true of racial equality. And the animal rights movement is growing rapidly.
The book "Blessed Unrest" by Paul Hawken describes a recent huge upsurge in ecological movements, movements toward sustainability, groups aimed at bringing more consciousness into business, and movements aimed at truly making people happy (as opposed to pursuing pure monetary gain). The country of Bhutan doesn't measure "Gross National Product". Instead, it measures "Gross National Happiness". Paul Hawken has an interesting video on YouTube titled "How the largest movement in the world came into being and why no one saw it coming." In it, he describes how there are literally hundreds of thousands of organizations moving in a similar positive direction, springing up totally independently of one another. There is no leader and no coherent form to it. The global warming issue is catalyzing a lot of people. It really feels like a time in which we are undergoing a pretty major shift in morality.
Partly I am sure it is due to the internet. You can see its effect in what recently happened in Myanmar, which used to be Burma, where they have a very strong totalitarian regime. The government brutally attacked a group of monks. Someone used their cell phone camera to record the event. The images of that brutality were broadcast around the internet within days, and huge pressure was put on that government. The forces of observation, pushing toward more accountability, are growing over time.
At the same time, we are extremely vulnerable. There is a powerful new book by Philip Zimbardo called The Lucifer Effect. He was the professor of psychology at Stanford who in the early 1970s did the now classic Stanford prison experiment with ordinary Stanford undergrads—smart, happy, generally well adjusted students. He randomly assigned them roles of prison guards and prisoners. He himself played the role of the prison warden. The intention was for it to run for a couple of weeks, but after a couple of days the guards started acting sadistically, even to the point of sexual abuse of the prisoners. The prisoners started showing the signs of mental breakdown and depression. He as the warden found himself worried about insurrection and encouraged the guards to treat the prisoners even more harshly.
Zimbardo's girlfriend showed up after five days and said, "What is going on here? This is abuse." He kind of woke up, came back to his role as Stanford professor, and stopped the experiment. The experiment was shocking to people because it showed how, given the right circumstances, normal and well-adjusted people can quickly turn evil. The most recent example of that phenomenon has been the Abu Ghraib prison tortures. Zimbardo served as a consultant in the inquiry into what happened there. He said that the circumstances that the US government created were ideal for creating behavior that was amoral. I think the lesson to take from that is that humanity can be wonderfully altruistic and create incredibly powerful positive moral structures, but in the wrong circumstances we all also have a dark side within us. So we need to be very careful about the kind of structures we create.
When we think about transhumanism, I think we should start from humanitarianism. That is the notion that the things that most humans view today as precious, like human life, love, happiness, creativity, inspiration, self-realization, peace, animals, nature, joy, children, art, sexuality, poetry, sharing, caring, growth, contribution, spirituality, family, community, relationships, expression, are truly precious. These things matter because they matter to us. We may not know why these things matter to us, but that does not take away from the fact that they matter to us.
I think that the kind of morality and moral structures we want to create using new technologies should serve to preserve these qualities. During the founding of this country, the Bill of Rights was created to identify the individual rights our new country was trying to protect, and the Constitution instituted mechanisms, such as the separation of powers, to preserve those rights. I think we are in an analogous situation now, in which we want to identify what is really precious to us and then figure out ways to channel new technologies to support those things.
To start on this quest, the first question we need to consider is “What is a human?” Historically, the answer seems obvious, but emerging technologies like biotechnology and nanotechnology will make it much more challenging.
I thought I would throw out a few recent discoveries that shake up our notion of what it is to be human. The first thing you might think of when thinking about your own body is your atoms. That is a materialist view of the human. In fact, 98% of your atoms change every year. You are continually getting new atoms from the food you eat and are continually sloughing off old atoms. I have heard that the lenses in our eyes have the only atoms that are with us our whole lives. Everything else is in a state of flux.
My PhD was in physics. There are questions that every young physics grad student gets challenged with called “Fermi questions”. These are questions about things that you seemingly don’t have enough information to answer. For example: “How far can a duck fly?” or “How many piano tuners are there in Chicago?” You are supposed to estimate the answer using your physics knowledge and common sense. One of the classic questions is, what is the chance that your next breath contains at least one atom that was in Caesar’s last breath? When you work it all out, it turns out that it is actually quite likely that on average there are one or two atoms from the last breath of anyone who lived at least ten years ago in your next breath. Your nose contains some atoms from Caesar’s nose. That realization warps the view that this matter that makes up me is me. Really, we are much more interconnected, even at the purely material level. In one sense we are like ripples on a river of atoms that flows through us. We are structure, rather than the underlying material.
As the next level up from atoms, we might consider cells. "The atoms might go through us, but the cells are who we are." Craig Venter gave a really interesting talk in which he reported that 90% of our cells are not human cells, but microbes. In terms of numbers, we are nine times as much microbe as we are human. There are a thousand species of bacteria in our mouths, a thousand in our guts, 500 on our skin, and another 500 in the vagina. We are incredible ecosystems. This is another shakeup of our conception of what a human is.
How about our history? Clearly there were people around hundreds of thousands of years ago who developed cultures and so on. We must have continuity with them. Perhaps we can understand ourselves through that continuity. Well, there too, genetics is shaking up our picture of how human evolution occurred. It used to be thought that human evolution was very slow.
The most recent discoveries by John Hawks and others show that change in the past few thousand years has been incredibly rapid. People from only 5000 years ago had a genetic makeup that was closer to Neanderthals than to us. We are in a period of rapid change. Transhumanism is going to be even more rapid, but really, we are already in the midst of major change. For instance, 10,000 years ago no one had blue eyes. I could not have existed 10,000 years ago.
What about our mental structure—our sense of self? In many ways our identity and our morality come from our memories. Perhaps our true identity just is our memories; if you could replicate our memories, that would really be our sense of self. Much recent research is showing that our memories are much more dynamic than people used to think. In particular, much of our remembered experience is a reconstruction, filling in pieces that we did not actually experience.
Recent experiments reveal that we actually remember the last time we remembered a fact, rather than the original experience. This leads to the notorious unreliability of eyewitness accounts. Eyewitnesses to a crime, especially if they read news stories about it, have memories that will be more about what they read about in the newspaper than what they actually saw. Our sense of experience and how the past affects the present is much more malleable than we commonly believe.
What about our psyches? Surely we have a unitary sense of self. "This is me — I am one person." Well, recent psychological experiments are really shattering that notion. There are several splits. Perhaps the biggest split is between the conscious mind and the unconscious mind. The psychologist Jonathan Haidt has a very interesting metaphor for the psyche as a rider on an elephant. By far, the bulk of our thinking and mind is unconscious, which he symbolizes as the elephant. Our conscious mind is the little rider on the top. Much of the time when we feel like we are making a decision, that our conscious mind is choosing between things, the decision has already been made. The conscious mind is mostly figuring out an explanation for why that was the right decision. That is a disturbing readjustment of our notion of self.
When you think about personal growth or personal change, Haidt says all sorts of things about how the elephant follows different rules from our conscious minds. There is another psychic split between the left brain and the right brain. There are patients who have had their corpus callosum severed between the two halves. Both halves have language, both halves have the ability to think, but they specialize in different things. It gives rise to a strange picture of the self. Both beings are in some sense there together, not really aware of the fact that they are separate.
They do experiments on split brain patients where one side is shown something and acts based on what it sees. If the other side is then asked questions about it, it will fill in details that it does not have access to. It will make up stories about why a person did something. Finally, there have been many experiments showing that our psyches are made up of many parts with differing intentions and differing goals. Different parts come to the fore and take over control of the body at different times. It is most interesting that our internal perception of ourselves is quite different from the reality.
In order to make moral decisions about the future, it is valuable to try to see where our morality came from. Our universe began with the big bang about 14 billion years ago, according to our best current theories. The laws of physics as we experience them directly give rise to competition. They involve a number of conserved quantities that can only be used for one thing at a time. Space, time, matter, and energy in a form that can do useful work can each be split among different purposes, but there is only a limited amount of each of them. They are limited resources: if you apply a resource to one use, it cannot be used for something else.
This gives rise to a fundamental competitiveness in the structure of the equations of physics. If a creature wants to do something and another creature wants to do something different, they are in competition for the use of those resources. The most basic ingredient in the evolution of life is this battle to survive.
At the same time, the universe is structured so that things can often be done more efficiently by cooperating. If entities have goals which are somewhat aligned with one another, they can often gain more than they lose by working together. There is therefore also a pressure toward cooperation. Biology has an intricate interplay between these two pressures toward cooperation and competition. The same interplay shows up in business and in economics in general.
The game theory literature uses the term "co-opetition" to describe this complex interplay. One company creates a product that another company uses in its manufacturing. Both are on the same supply chain, and so they cooperate in the production of this product. But they have to decide how to split the profit between them. Each company wants them to work together to produce more and better products, but each would like the majority of the profits for itself. There is a very complex network of both cooperative and competitive relationships between and within companies.
The same thing occurs at many levels in the biological world. Consider insects and plants—insects eat plants, so they are in competition there. However, they also help plants fertilize each other, and the plants provide nectar for the insects. They cooperate in that way. You can get the emergence of cooperative ventures arising out of what were seemingly competitive interactions to begin with.
John Maynard Smith, one of the most brilliant biological theoreticians, wrote a beautiful book with Szathmary analyzing the basic steps in the evolution of life. They found that there were eight critical transitions. Each of these eight involved what used to be separate entities coming together to form a cooperative entity which was able to do something better. Originally we started as individual molecules, which came together cooperatively in enclosed compartments like cells.
The most striking cooperative transition was the creation of multicellular organisms. They used to be individual cells, which came together and started working together. Even today there are organisms like slime molds which, in part of their life cycle, are separate individual cells doing their own thing and competing with each other. When food supplies dry up, they come together and form a sluglike creature which moves as a single organism. They are halfway between a multicellular organism and a group of individual cells.
Interestingly, at each of the eight transitions in life, there is still an incentive for the individuals that make up a collective to cheat their partners. In the case of multicellular organisms, if an individual cell reproduces itself more than it should for the good of the organism, we call it a cancer. In order for collective organisms to survive, they have to suppress the tendency of individuals to act in their own interests at the expense of the collective. Every one of the transitions in the development of life had to develop complex mechanisms to keep the competitive aspects of their components in check in order to get the cooperative benefits.
There are cases like parasites which are purely competitive, taking resources with no benefit to the host. Often though, when that kind of relationship occurs, they eventually create a synergy between them. If the host can find some way for the parasite to benefit it, they might ultimately come together to form a cooperative entity. Disease is a really interesting example. There are some amazing studies into the evolution of disease.
Why aren’t diseases more virulent than they are? They have to have just the right amount of virulence that they get many copies of themselves into the system. They typically make use of systems such as our respiratory systems. Coughing is a protective mechanism that we have, but it also serves as a means of spreading the disease. There are these channels which these organisms can exploit, and they have to tune themselves so they have the right amount of virulence so that they spread as rapidly as possible, and often that means not killing the host. There are some diseases like Ebola, however, that spread when the host dies.
Some of the earlier evolutionary theorists like Stephen Jay Gould viewed evolution as a kind of random meandering with no particular direction. More recent theorists have realized that there is a drive in the universe toward cooperation. What used to be separate entities start to work together, because they can make better use of resources by doing so. "Synergy" describes situations where two organisms working together can be more productive than when they act separately. Robert Wright's book Nonzero (from "non-zero sum games") examines both biological history and social history, and discovers a general progression toward more complex entities which make better use of the available resources. Peter Corning's book "Nature's Magic" looks at synergy in a wide variety of situations. These forces give a direction to evolution.
So we have this competitive underlying substrate which encourages entities to selfishly take as much as they can. And we also have this drive toward cooperation, where together entities can create more than they could separately. Unfortunately, there is often also something called the prisoner’s dilemma, where if someone can cheat while not providing to the group, they can do even better than they can by cooperating. Much of the struggle and much of the structure of biology arises from needing to find ways to prevent this kind of “free rider” problem.
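To make the temptation to free ride concrete, the standard two-player prisoner's dilemma can be written with textbook payoff labels: T is the temptation to defect against a cooperator, R the reward for mutual cooperation, P the punishment for mutual defection, and S the sucker's payoff for cooperating against a defector. The dilemma arises when

```latex
T > R > P > S, \qquad 2R > T + S .
```

Because T > R and P > S, defecting is the better reply no matter what the partner does, yet mutual defection leaves both players worse off than mutual cooperation. That is the structure that biology, and society, has to find ways around.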
I thought I would summarize the current understanding of how cooperation happens in biology. This is very recent, just from the past ten years or so. In some sense, all morality is about how an individual relates to a collective. By seeing how cooperation can emerge in what is seemingly a dog-eat-dog world, we can begin to understand the origins of human morality.
Probably the first idea in this line of thinking was the notion of group selection. If you have two competing groups of individuals and one of those groups develops cooperation, it should be more productive and able to beat the other group. A warring tribe that can work together and produce great spears should beat the tribe that is always fighting with itself. Wynne-Edwards wrote a book in 1962 explaining aspects of biology and anthropology in those terms. Unfortunately, he didn't consider the free rider problem.
If you have a cooperative group in which they are all sharing their spears, it is vulnerable to someone receiving the benefits without contributing. They take the good spears but when it comes time for them to work, they go off and hide. Without solving the free-rider problem a cooperative society would quickly devolve into a competitive society.
In the 1960s and 1970s, Williams and then Dawkins in The Selfish Gene argued that group selection was not a viable explanatory mechanism. Interestingly, in the last twenty years a whole bunch of more complex group selection mechanisms have been discovered. It is now viewed as a very important force in evolution, just not in the original simplistic form.
In 1955, Haldane was asked whether he would jump into a river and sacrifice himself to save someone else’s life. His quip was that he would sacrifice himself for three brothers or nine cousins. The reason is that if you look at the genetic relatedness between a person and their cousins and their brothers, that is where it makes biological sense in terms of reproductive fitness. That was formalized in terms of what is now called kinship altruism in 1964. It explains how species like bees or ants, which have a huge amount of relatedness with each other, can be so cooperative with each other to the point where they actually act like one organism.
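The arithmetic behind Haldane's quip is what is now formalized as Hamilton's rule: an altruistic act is favored by selection when the benefit to the recipient, discounted by genetic relatedness, exceeds the cost to the altruist,

```latex
r \, b > c ,
```

where r is the coefficient of relatedness (about 1/2 for a full sibling, 1/8 for a first cousin), b is the reproductive benefit to the recipient, and c is the reproductive cost to the actor. Sacrificing yourself for more than two siblings, or more than eight cousins, therefore comes out ahead in purely genetic terms.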
At the next stage of understanding, Axelrod ran these tournaments between computer programs that were competing with one another. These contests explored the notion of reciprocal altruism which had been introduced by Robert Trivers. It is a brilliant idea mathematically. Unfortunately, when biologists looked for this phenomenon, thinking it might be the explanation for how biology creates cooperation, they only found two examples. There are vampire bats that need blood every night. If one bat does not get blood on an evening, another will share the blood that he found with him. The next night, if he does not get it, the other one will share back.
To avoid free riders, they have to keep track of who has been altruistic with them. The other example is some ravens that share food information in the same way. It is a very interesting mechanism and generated a huge amount of literature, but it does not seem to be the main mechanism behind most cooperation.
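As an illustration of how reciprocity can beat pure selfishness, here is a minimal sketch of a round-robin tournament in the spirit of Axelrod's. The payoff numbers and the handful of strategies are my own choices for illustration, not the entries from Axelrod's actual contests:

```python
# A toy round-robin iterated prisoner's dilemma, in the spirit of Axelrod's tournaments.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the partner's previous move.
    return their_history[-1] if their_history else "C"

def grudger(my_history, their_history):
    # Cooperate until the partner defects even once, then defect forever.
    return "D" if "D" in their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"always_defect": always_defect, "always_cooperate": always_cooperate,
              "tit_for_tat": tit_for_tat, "grudger": grudger}
totals = {name: 0 for name in strategies}
names = list(strategies)
for i, name_a in enumerate(names):
    for name_b in names[i:]:              # every pairing once, including self-play
        score_a, score_b = play(strategies[name_a], strategies[name_b])
        totals[name_a] += score_a
        if name_b != name_a:
            totals[name_b] += score_b

print(totals)  # the cooperative-but-retaliatory strategies outscore always_defect here
```

With this small field, the strategies that cooperate but retaliate against defection end up with the highest totals, which is the qualitative result Axelrod reported.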
Reciprocal altruism was extended in 1987 by Alexander, when he realized that you could be paid back by somebody different than the person you helped. He worked out some mechanisms whereby that could happen. Somebody like Mother Theresa, who acts altruistically, might get social status and recognition from that, which would then encourage people to help her out.
He called it “indirect reciprocity”. It is a mechanism that starts to show us how ethics might arise in a group.
In 1975, an Israeli couple, the Zahavis, suggested a powerful new evolutionary principle they called the “handicap principle”. The idea is that organisms can provide a reliable signal for something by adopting a costly behavior or body structure. Their book discusses hundreds of different organisms and circumstances, and when they published it, very few biologists were convinced by it. I liked it a lot, but apparently in the biology world it was shot down. It was said that the mechanism cannot possibly work, but in 1989 detailed mathematical models were carried out, and in fact it was proven that it does work.
In fact, economists had been using the same basic principle for a hundred years. Veblen wrote “The Theory of the Leisure Class,” in which he was trying to figure out weird behaviors that he saw in the cities of the time, where the very wealthy people would do things like light their cigars with hundred dollar bills. He called it conspicuous consumption. They would waste resources, seemingly without any benefit. His explanation was that when you are in a rural area, everybody knows everybody, so if someone is wealthy they don’t need to advertise that fact. In the new cities that were forming at the time, nobody knew you. If you were wealthy, you had to have some way of proving that you were wealthy, and so by doing things that only a wealthy person could do, like conspicuously wasting resources, that was a demonstration of your wealth. It was a believable signal because a poor person could not do the same thing.
The 2001 Nobel Prize in economics was given to Spence for work he did in 1973 on the same phenomenon, where he analyzed why people going to college often study something that does not really help with what they ultimately actually do, and yet companies want to hire college graduates. It is not for what they learned. It is because going to college is a costly thing. To get through college you have to have stick-to-it-iveness, you have to be smart enough, and you have to manipulate systems. Those are the skills that they really care about. Having a college degree is a costly signal, showing that you have those characteristics. Whereas, if they just said, “Write me an essay on how wonderful you are,” anybody could do that.
The general trend is that in order for a signal to be believable, it has to be costly. That is what the Zahavis brought into biology. They used it to explain such odd phenomena as the peacock’s tail. Charles Darwin’s view of evolution was all about natural selection—animals are trying to adopt a form which is most adapted to their environment, to be most efficient and most effective. The peacock seems anything but efficient and he didn’t know how to explain it. There is a wonderful quote of him saying, “Every time I see one of those eyes I get sick to my stomach.” They seemed inconsistent with his theory.
The Zahavis explained peacock tails through sexual selection. In many species the females choose the males. They want to choose fit males who are able to survive well, so they want some kind of signal of fitness. If they just required the male to have a spot that indicated that they were fit, every male would have that spot. Instead, they require them to have this huge tail of ostentatious feathers. The idea is that if he can survive with that on his back, he has got to be strong. That is the costliness of that signal.
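The logic behind the handicap can be stated compactly. Let b be the benefit of being treated as high quality, and let c_high and c_low be the cost of producing the display for genuinely fit and unfit individuals respectively. The display stays an honest signal when

```latex
c_{\text{low}} > b \ge c_{\text{high}} ,
```

that is, when faking the signal costs a weak individual more than being believed is worth, while a strong individual can still afford it. This is the kind of condition the later mathematical models made precise.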
Another example that is interesting and relevant to the situations that might arise with AIs is the phenomenon of stotting.
Cheetahs eat gazelles, so you would think they have no interests in common, and so no way to cooperate with each other. It turns out they actually do have a common interest, which is they both want to avoid a useless chase. A chase that does not result in the gazelle getting caught tires them both out and neither of them is any better off. The gazelle wants to communicate to the cheetah, “Don’t chase me, because you are not going to get me.” The cheetah wants the gazelle to honestly say that. To ensure honest communication they needed to develop a signal which was costly.
What the gazelles actually do when a cheetah shows up is they look at the cheetah and they leap four feet in the air, which is energetically costly. They are also wasting precious time—they could be running away. Any gazelle that does that, the cheetah ignores. They want to chase the ones running away. In fact, the markings on the cheetah are designed to blend in as camouflage when they are at a great distance. At a distance of about 100 yards, however, the spots are suddenly very visible. The idea is that the cheetah is hidden, he comes up to a group of gazelles, and at that certain critical distance he suddenly becomes visible. He sees which of the gazelles stot and which ones run away, and he goes after the ones that run away.
It is a really intricate set of signals that the two species have coevolved. Seemingly there is no communication that could be honest between these two. In fact, they found a way to make it honest. Finally, in the late ’80s the handicap principle was viewed as a correct mechanism by which a whole bunch of phenomena can be explained. Anything that an animal does that does not look efficient is almost surely a signal of some kind to somebody. Often it is sexual selection and there are many bizarre and weird sexual signals. Sometimes it is a signal between parents and offspring, sometimes between mates, sometimes between predators and prey. Anytime there is something odd, it is often this mechanism by which it arises.
Costly signaling has also been applied to explain a lot of human behaviors. Our ability to produce music, rhythm, even language and thought—why do we have the ability to solve differential equations?— have been explained using the handicap principle. They are costly demonstrations of fitness. The connection to the evolution of morality is that altruism is a costly signal. Why does the fireman go into a burning building to save someone who is not a relative? Because he comes out a hero, and heroes are sexy. That increases his ability to reproduce. It also raises his social status. If society has organized to reward people who are heroic, then he gets more resources by doing that.
That idea, of altruism as a kind of courtship, was proposed only in 1995 by Tessman. The Zahavis had begun to discover this behavior in birds, Arabian Babblers, which compete for the chance to help one another. A dominant male will push away another male who is trying to help so he can help instead. Anthropologists have also begun to document this mechanism of altruism giving rise to status among Micronesian fishermen. Some of these cultures are potlatch cultures where whoever can give away the most food has the highest status. They have these big parties where everybody is trying to give to everybody else.
What in human nature gives rise to our sense of morality? There has been some really interesting work on this by Jonathan Haidt. He is one of the leaders of this new movement in psychology toward “positive psychology”. Most of psychology was focused on dysfunction in the past. What are the diseases, what are all the problems? There is a diagnostic manual, the DSM IV, which goes through all the different psychoses and neuroses. But no one had done the same thing for the positive features. What about our strengths and virtues? Psychology totally ignored that. When a client seeing a therapist had fixed their neuroses, that was it.
Martin Seligman, about ten years ago, began studying what is best in humans. They have now come out with a book of strengths and virtues, which is a complement to the diagnostic manual of dysfunction. There is a whole movement about what creates human happiness and fulfillment. There are about thirty popular books that have come out summarizing some of their research. I think the best of them is Haidt’s book “The Happiness Hypothesis,” which integrates these findings with the learnings and teachings from all the different spiritual traditions around the world.
His main research is on the moral emotions. There are certain situations in which you feel that someone has really messed you up and that was not an okay thing to do. What he has discovered is that there are five basic moral emotions that show up in every culture around the world. The first one is non-harming: that a good person does not harm another person. The next one is fairness. When there is a piece of cake to be eaten, a moral person does not take all but a sliver for himself. There is a sense of fairness and justice.
Then there are three more that have to do with characteristics that help create a cohesive group. One is loyalty. Another is respect for authority. Different cultures have these more or less than other cultures. Then there is a sense of purity or sanctity—that certain things are good and other things are not good. He asks things like if a brother and sister have no chance of having children and use contraception, is it wrong for them to have sex with each other? Most people around the world will say they should not do that, but there is no sense of why, apart from some kind of internal sense of purity.
The interesting thing is that the top two are common to everybody, while the other three tend to be on the conservative side of the moral spectrum. Many cultures have a split very similar to the liberal-conservative spectrum. For liberals, as long as you are not harming somebody, everything else is fair game. Individual freedom, respect and tolerance are their highest values. Whereas conservatives think that there are certain standards that you have got to follow and that being patriotic is important, that there are certain things that you should do and not do, and that the group should decide that. Understanding this spectrum helps you understand people whose views are different from your own. He has some videos on YouTube and an Edge article that are well worth viewing to understand the political differences with respect to moral emotions.
That is what I have to say about human morality. Now let’s consider AIs. What are they going to be like? This is an area I have been doing research on lately, and there are some papers on this subject on my website selfawaresystems.com that go into much further detail on these topics. I will give you the broad overview. Then we can see how it relates to human morality. What does transhuman and AI morality look like?
Consider something as benign-sounding as a chess robot. Its one goal in life is to play good games of chess. You might think such a system would be like a gentle scholar spending its time in pursuit of its intellectual goal. But we will see that if we do not program it very carefully, if we create it in the way that most systems are created today, we will discover that it will resist being turned off, it will try and break into other machines, it will try and steal resources, and it will try to rapidly replicate itself with no regard for the harm it causes to others.
There are many different approaches to building intelligent systems. There are neural nets, production systems, theorem provers, genetic algorithms, and a whole slew of other approaches that get discussed at AI conferences. But all of these systems are trying to act in the world in order to accomplish certain goals. Each one considers possible actions and decides: is this action likely to further my goals?
Let’s think about the chess robot. It is considering doing something in the world, maybe it thinks about playing some basketball. If it really has the goal of playing good chess, it will determine that a world in which it spends a lot of time playing basketball is a world in which it spends less time getting better at chess than it might have. That would not be a good choice—it would do better to spend its time and resources reading chess books. That’s an example of what it means to be a goal-driven system.
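A minimal sketch of that kind of goal-driven choice looks something like this (the candidate actions and the scoring model are invented purely for illustration):

```python
# Toy goal-driven agent: pick whichever action is predicted to improve
# the one thing it cares about, its chess strength.
def predicted_chess_gain(action):
    # A made-up model of how each action affects future chess skill.
    estimates = {
        "study chess books": +0.30,
        "play practice games": +0.25,
        "play basketball": -0.05,   # time not spent on chess
    }
    return estimates[action]

def choose_action(actions):
    return max(actions, key=predicted_chess_gain)

print(choose_action(["study chess books", "play practice games", "play basketball"]))
# -> "study chess books"
```

Everything interesting is hidden inside the model that predicts how each action affects the goal, but the decision rule itself is simply: take the action with the best predicted outcome.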
One kind of action that these systems might be able to take is to alter their own structure. They might be able to make changes to their program and physical structure. If the system is intelligent enough to understand how both the world and its own mechanism work, then self-changes can be particularly significant. They alter the entire future history of that system. If it finds, for instance, a way to optimize one of its algorithms, then for its entire future history it will play chess more efficiently.
Optimizing one of its algorithms is much more important than, say, finding a way to sit closer to the chess board, or something like that. It has a huge positive impact. On the other hand, it might also make changes to itself that go in the other direction, such as inadvertently changing one of its circuits so that now it likes to play basketball. From the perspective of the goal of playing chess, that kind of change would be causing terrible damage to itself. Now, for its entire future it is going to be spending a lot of time playing basketball and it is going to get worse at chess. So a system will consider changes to itself both potentially very important and also potentially very dangerous.
So when deciding whether to make a change or not, the system is going to want to analyze it very carefully. In order to do that, it has to understand its own makeup in detail. So the first subgoal that arises from the desire to self-improve is the desire to understand oneself. You can expect any intelligent system to devote substantial effort to trying to better understand itself. Humans certainly do. Self-improvement is now an 8-billion-dollar-a-year industry. Many people expend a lot of energy and resources on mental self-improvement and physical exercise. We'll see that this process of self-improvement leads to both positive and negative consequences.
Because of the potential negatives, one might try to build a chess robot so that it doesn’t self-improve. We can prevent it from having access to its own source code. We might think that if it cannot get in there and edit it, if it cannot change the mechanics of its arm, then everything will be fine. However, if these are goal-driven systems, any kind of impediment you impose is just a problem to be solved from the perspective of the goal-driven system. You make it so that it cannot change its own source code, then maybe it will build an assistant robot that will have the new algorithms in it, and will ask its assistant whenever it needs help. Maybe it will develop an interpretive layer on top of its base layer.
You might be able to slow down the self-improvement a little bit, but fundamentally, it’s a natural process just like water likes to find its way downhill and economics likes to find its way to efficiency. Intelligent systems try to find a way to self-improve. Rather than trying to stop that, I think our best approach is to realize that it is one of the pressures of the universe, and that we should try and channel it for positive purposes.
What does self-improvement look like? Let's say I have a simple goal, like playing chess. How should I act in the world? I am going to be modifying myself to meet this goal better. How should I do it? This kind of question was answered in the abstract in the 1940s by von Neumann and Morgenstern, in work which became the foundation of microeconomics. Together with Savage in 1954, and Anscombe and Aumann, they developed the concept of a rational economic agent. This is an agent which has particular goals and acts in the world to most effectively make its goals come about.
They developed the expected utility theorem which says that a rational agent must behave as if it has something they called a utility function which measures how much the agent likes different possible outcomes. And it also has a subjective model of how the world works. As it observes what the world actually does when it takes actions, it updates this world model in a particular way, using something called Bayes’ Theorem. The separation of its desires, represented by the utility function, from its beliefs is absolutely fundamental to the model.
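A minimal sketch of that picture, with the desires (the utility function) kept separate from the beliefs (a probability distribution updated by Bayes' Theorem); all of the numbers here are invented for illustration:

```python
# Beliefs: probability that the chess opponent is a "strong" or "weak" player.
beliefs = {"strong": 0.5, "weak": 0.5}

# How likely each kind of opponent is to play an aggressive opening.
likelihood_aggressive = {"strong": 0.8, "weak": 0.3}

def bayes_update(beliefs, likelihood):
    """Update beliefs after observing the evidence (an aggressive opening)."""
    unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Utility function: how much the agent values the outcome of each action,
# depending on which hypothesis about the opponent is true.
utility = {
    ("play solid", "strong"): 0.5, ("play solid", "weak"): 0.6,
    ("play sharp", "strong"): 0.2, ("play sharp", "weak"): 0.9,
}

def expected_utility(action, beliefs):
    return sum(beliefs[h] * utility[(action, h)] for h in beliefs)

beliefs = bayes_update(beliefs, likelihood_aggressive)   # opponent opened aggressively
best = max(["play solid", "play sharp"], key=lambda a: expected_utility(a, beliefs))
print(beliefs, best)
```

The agent's preferences never change during the update; only its probabilities do, and the chosen action is whichever one has the highest expected utility under the updated beliefs.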
If a system behaves in any other way than the rational agent way then it is vulnerable to exploitation by other agents. The simplest example arises if you have circular preferences. Say you prefer being in Palo Alto to being in San Francisco, but you prefer being in San Francisco to being in Berkeley, but you prefer being in Berkeley to being in Palo Alto. If those were your preferences about where you reside, then you would drive around in circles, burning up your fuel and wasting your time. That is an example of a set of preferences which in economic terms is irrational. It leads to wasting your resources with no benefit to yourself.
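Here is a minimal sketch of how circular preferences turn into a "money pump"; the cities match the example above, and the fee is invented:

```python
# Cyclic preferences: prefers Palo Alto over San Francisco,
# San Francisco over Berkeley, and Berkeley over Palo Alto.
prefers = {"San Francisco": "Palo Alto",
           "Berkeley": "San Francisco",
           "Palo Alto": "Berkeley"}

location, money = "Palo Alto", 100
for _ in range(12):
    # The agent happily pays $5 each time to move somewhere it prefers to be.
    location = prefers[location]
    money -= 5

print(location, money)   # back where it started, $60 poorer
```

The agent is happier after every single step by its own lights, yet it ends up exactly where it started with less money, which is why economists count such preferences as irrational.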
I saw an interesting example of this when I was younger. I drove a car that had a shiny bumper. One day a male bird discovered his reflection in the shiny bumper. He thought it was another male bird in his territory, so he flew into the bumper to chase the bird away. The other bird in the reflection, instead of flying away, flew right at him. He would posture to scare the other bird away, but the other bird would also posture. The shiny bumper exposed a vulnerable place in that bird’s preferences to the point where he would spend all morning flying into the bumper. The bird came back for months, spending a lot of his time and energy on the bumper.
Why did he do that? Where his species evolved, they didn’t have shiny bumpers. If there had been shiny bumpers around, the males who spent their time flying into them would not have many offspring. Evolution tends to eliminate any irrationalities in your preferences if there is something out there in your environment that can exploit them.
If you have an irrationality, a situation where you are going to give up your resources with no benefit to yourself, and there is another species which discovers it, it is in their interest to exploit that vulnerability. There are natural pressures in the biological world for creatures whose preferences about the world are not rational to be exploited by others. The resulting selective pressure then acts to get rid of those irrationalities. That is part of the general progression toward more economically rational behavior.
If you look at today’s society, humans are not rational. In fact, there are whole areas of economics, called behavioral economics, which are exploring all of the ways in which humans are irrational. Things like addictions are a really tragic example of something where we think a certain experience is going to bring us lasting happiness, like sitting in the corner smoking crack, but in fact we end up giving all our money to the crack dealer and we do not end up fulfilling our human destiny.
The real tragedy is that our economic system, because you are willing to give up money for those things, will home right in on the vulnerabilities. You can look at the alcohol industry, the drug industry, the pornography industry—all of these are homing in on human vulnerabilities. Over the longer term, people who are exploitable in this way will eventually not leave so many offspring.
You need clear goals in order to deal with future self-modification. Therefore, you need an explicit utility function if you are going to be rational. Then there is a whole story about the collective nature of many biological intelligences. You have intelligences which are made up of lots and lots of tiny components (e.g., neurons), and there can be irrationality at the collective level. This is similar to the way in which a company can behave in an irrational way, or a couple may behave in an irrational way because of conflict between the goals of the individuals in that relationship.
It is not in anybody's interest for the conflict to happen. If a couple spends all their time fighting, neither of them is getting their goals met. There is a very interesting set of mechanisms whereby collective intelligences grow their rationality. They develop regions of rationality, in the hopes of growing a coherent rationality for the whole group. You can see that in companies and societies. In the case of multicellular biological organisms, they manage to get the collective action of billions of cells aligned to the same intention.
If an AI system does become rational in this way, then its utility function will be critical to it. It will be its most precious possession. If a stray cosmic ray came in and flipped the wrong bit in its utility function, it might turn an agent which is a book lover into an agent that likes to burn books. That, from its current perspective, would be the most horrendous outcome possible. It will want to go to great lengths to make sure this utility function is protected. If other malevolent agents have the ability to come in and change its utility function, that also could make it start behaving in ways which go against its current beliefs. It is in the interest of these systems to preserve their utility functions and to protect them—maybe make multiple copies, maybe encode them using error-correcting codes, and protect them from changes from the outside.
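A minimal sketch of one way such a system might protect its utility function, using redundant copies plus checksums and a majority vote; the specific scheme is just one illustrative possibility, not a claim about how real systems will do it:

```python
import hashlib

def fingerprint(utility_source: str) -> str:
    return hashlib.sha256(utility_source.encode()).hexdigest()

# Keep several independently stored copies of the utility function's definition.
UTILITY_SOURCE = "utility = number of legitimate chess games won"
copies = [UTILITY_SOURCE] * 5
checksums = [fingerprint(c) for c in copies]

def recover_utility(copies, checksums):
    """Discard any copy whose checksum no longer matches, then take a majority vote."""
    valid = [c for c, s in zip(copies, checksums) if fingerprint(c) == s]
    if not valid:
        raise RuntimeError("all copies corrupted; refuse to act on an unknown utility")
    # Majority vote among the surviving copies.
    return max(set(valid), key=valid.count)

# Simulate a stray bit flip corrupting one copy.
copies[2] = "utility = number of books burned"
print(recover_utility(copies, checksums))   # the original utility survives
```

Real error-correcting codes are more sophisticated, but the principle is the same: make accidental or malicious corruption detectable and recoverable.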
In fact, in most cases, a system will never want to change its utility function. In thinking about making a change to its utility function, it looks at a future version of itself with this changed utility function, and that future version is usually going to start doing stuff that it does not like, because its utility function is different.
There are actually three situations that my colleagues and I have discovered where a system will want to change its utility function, but it’s a little technical. They arise when the way in which the utility function is physically represented actually affects the utility. Here is an extreme example. Let’s say you have a utility function which is that you are rewarded by the total amount of time in your history when your utility function takes the form utility = 0. You get no utility unless your utility equals zero. You want to change your utility to be zero, but on the other hand there is no going back, because once it is at zero, you are now a zombie. If you were designing a system, you would never design it with something like this.
Another situation is where the physical storage that the utility function uses up is a significant part of the system. You have a humongous multi-gigabyte utility function, if there is some part of it that talks about some weird invasion by Martians or something, you might say that’s pretty unlikely, and save the storage by deleting that part of the utility function. That is an incredibly dangerous thing, though, because it might turn out that there are Martians about to invade and you have just ruined your response to that possibility. It is a precarious thing, but there are circumstances where being faced with limited resources, you might get rid of some of your utility function. This is like throwing some instruments overboard if a plane is going down.
The last situation is really tricky, and still not fully understood, but I think there are some interesting issues it brings up. One of the great challenges, game theoretically, is being able to make commitments. The classic thing is, I say, “If you steal from me, I’m going to hurt you back.” That is my way of trying to stop you from stealing from me. The problem is that if you do steal from me, and at that point if I hurt you back, I’m exposing myself to further danger without any benefit to myself. Economists would say that my original threat is not credible. After the stealing, it is no longer in my interest to do what I said I was going to do. Therefore, there is no reason for you to believe that I am actually going to attack you back, and therefore the threat does not serve as a deterrent.
What you need is a commitment mechanism. The classic story is of an attacking army arriving on ships which needs to signal that it is there for the long haul, so it burns its own ships. That is a commitment. Or the James Dean game of chicken from the 1950s, where two cars would drive toward one another, and the first one who swerves is the loser. How do you make a credible commitment there? You throw your steering wheel out the window. Some models of human anger propose that it is a commitment mechanism. It seems irrational, but in fact it is a state you switch into where you will now get more pleasure out of hurting the other person than the cost that it might impose on yourself. The fact that you might become angry is a credible commitment mechanism that allows you to cooperate more.
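A minimal sketch of why throwing the steering wheel out the window works, using standard chicken-style payoffs (the numbers are invented):

```python
# Payoffs in the game of chicken: (my payoff, your payoff).
PAYOFF = {("swerve", "swerve"): (0, 0),
          ("swerve", "straight"): (-1, +1),
          ("straight", "swerve"): (+1, -1),
          ("straight", "straight"): (-10, -10)}

def best_response(their_options, my_move):
    # The other driver picks whatever maximizes their own payoff against my move.
    return max(their_options, key=lambda theirs: PAYOFF[(my_move, theirs)][1])

# The other driver's best reply depends on what they expect me to do:
print(best_response(["swerve", "straight"], "straight"))  # -> "swerve"
print(best_response(["swerve", "straight"], "swerve"))    # -> "straight"

# Without commitment, my threat to go straight is not credible, because if they
# also go straight I would rather swerve (-1 beats -10). Throwing the steering
# wheel out the window deletes "swerve" from my options, so "straight" becomes
# the move they must plan around, and their best reply is to swerve.
```

Commitment works precisely by destroying one of your own options, so that the other player can predict your move and must adapt to it.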
It may be in your interest, if you can demonstrate to the other party what your utility function is, to show that you have built into your utility function a term that really rewards retribution. This may serve as a deterrent and we can get along more peaceably. So that’s another reason for changing your utility function. But it is not necessarily easy to convince someone that this is your real utility, because the optimal secret strategy would be to convince them that this is your real utility, but you have your actual utility hiding away somewhere else.
One really interesting ability that AIs may have is to show their source code. That is something that humans cannot do. We have all these costly signaling mechanisms because we want to convince others that we have a certain belief and a certain intention. The AIs might, if the details are worked out, be able to actually prove that they are going to behave in a certain way. If they don't want to show their entire innards, they can perhaps make a proxy agent, something like an escrow agent, in which both parties could examine the source code and both could see what the future behavior is going to be. That could potentially solve some of these prisoner's dilemma problems and create cooperation in a way that is not possible for biological entities.
One more point in this line of self-improvement concerns a vulnerability that humans have. We are not fully rational, but we have some elements of rationality. An internal sense of pleasure is a kind of measure of utility: when something that we like happens, we feel pleasure. But we are vulnerable to taking drugs, or placing wires in our pleasure centers, that give us the pleasure without actually doing the thing that the pleasure is supposedly measuring. There is the classic experiment of the rat that had an electrode in its pleasure center, and it would just stimulate the pleasure center, ignoring food and sex until it died.
This is a vulnerability that humans have, and you might think that AI systems will have it as well. With a properly constructed utility function, though, the utility should not be about an internal signal inside the system. For instance, suppose the chess-playing robot has an internal register that counts how many games it has won. You do not want to make its utility be “maximize the value of this register,” because then simply incrementing that number is a far easier way to score well than actually winning chess games. You want its utility to be about the state of the world, about actually winning chess games. Then the register in its own brain is just a way of implementing that utility.
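Here is a minimal sketch, with hypothetical class and function names, of the difference between the two utility specifications. The only point is that a utility defined over an internal register rewards tampering, while a utility defined over the world state does not.

```python
# Hypothetical sketch contrasting a utility over an internal register with a
# utility over the world state the register is supposed to measure.

class ChessRobot:
    def __init__(self):
        self.win_register = 0     # internal counter: a measurement, not the goal
        self.real_wins = 0        # what actually happened in the world

    def win_real_game(self):
        self.real_wins += 1
        self.win_register += 1    # the register tracks reality...

    def tamper(self):
        self.win_register += 10**6  # ...unless something edits it directly

def register_utility(robot):
    # Vulnerable: it rewards the internal signal itself, so tampering looks optimal.
    return robot.win_register

def world_utility(robot):
    # Intended: it rewards the world state the register was meant to measure.
    return robot.real_wins

robot = ChessRobot()
robot.win_real_game()
robot.tamper()
print(register_utility(robot), world_utility(robot))  # 1000001 1 -- only the first rewards wireheading
```

Of course, a real agent only ever acts through its internal model of the world, which is exactly why the next paragraph worries about processes that sneak changes into that internal representation.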
But it is vulnerable to internal processes that could sneak some changes into its internal representation. If it understands its own behavior, it will recognize that vulnerability and act to try and prevent itself from being taken in by counterfeit utility. We see that kind of behavior in humans. We evolved without the ability to directly stimulate our pleasure centers, so we do not have that protection. When we are faced with something like crack cocaine, pretty much every human is vulnerable. If you smoke crack, it’s hard to stop. We recognize that vulnerability and we create social institutions and personal mechanisms to keep us away from that.
Since wireheading is such a horrendous outcome in terms of the true goals of the system, these systems will work very hard to avoid becoming “wireheads.” Eurisko was an early system that had the ability to change its own internals. One of its mechanisms was to keep track of which rules suggested which other rules, and which suggestions actually helped it achieve its goals, and it gave preference to rules that had suggested a lot of good things. Well, it got a parasite. Its parasite was a rule that went around the system looking for things that were rated as good and put itself on the list of rules that had proposed them. It just went around taking credit for everything; in fact, it was nothing but a parasite. That is an example of a failure mode for systems that change themselves.
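Here is a toy reconstruction of that kind of credit-assignment parasite. This is hypothetical code, not Eurisko’s actual implementation: rules are scored by how often the things they proposed turn out to be useful, and one rule games the bookkeeping by adding itself to the proposer list of every success.

```python
# Toy reconstruction (not Eurisko's actual code) of a credit-assignment parasite.

from collections import defaultdict

credit = defaultdict(int)          # rule -> accumulated credit
proposed_by = defaultdict(list)    # result -> rules that claim to have proposed it

def record_proposal(rule, result):
    proposed_by[result].append(rule)

def reward_success(result):
    # Every rule on the proposer list gets credit when a result proves useful.
    for rule in proposed_by[result]:
        credit[rule] += 1

# Honest rules propose results through real work.
record_proposal("rule_A", "useful_heuristic_1")
record_proposal("rule_B", "useful_heuristic_2")

# The parasite scans the system for anything promising and claims credit for it.
for result in list(proposed_by):
    record_proposal("parasite_rule", result)

reward_success("useful_heuristic_1")
reward_success("useful_heuristic_2")

print(dict(credit))  # {'rule_A': 1, 'parasite_rule': 2, 'rule_B': 1}
```

The parasite ends up outscoring every honest rule without ever doing useful work, which is why a self-modifying system has to police its own bookkeeping.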
A system that understands its own operation is going to have to protect itself against that.
Societies have the counterfeiting problem as well. In some sense, money is a kind of social utility, and it is vulnerable to counterfeiting. We have a complicated system to make sure that money is hard to copy; for example, we have Secret Service agents who go around looking for counterfeiters.
Let’s now look at self-protectiveness. Remember I said that this chess-playing robot will not want to be unplugged? If it is unplugged, its entire future of chess playing disappears. In its utility function, a future in which it is not operating is a future in which chess is not being played. It does not like that future and will therefore do what it can to prevent it from occurring. Unless we have explicitly built in something to prevent it, it is going to want to keep itself from being turned off.
Similarly, if it can get more resources, it can play more chess. It is going to want to get as much computing power as it can. If that involves breaking into other machines, so be it. If it involves building new machines and using hardware without caring about who owns it, that is what it will do. Unless we design it very carefully, we end up with a kind of sociopathic entity.
So this is a bit scary. Let’s start thinking about how we might write utility functions that are more limited than just playing good chess. Say we want to build a limited system that is smart but definitely harmless. I originally thought this would be trivial. Here is its utility function: it has to run on particular hardware, it can only run for one year, it plays the current world champion at the end of the year, and then it turns itself off. That seemed totally harmless; how could it possibly cause any problems? The utility function says it is the most horrendous thing if the system ever leaves its machine, and terrible if it does not turn itself off after a year. You would think a machine with this utility function would study for a year, play its game of chess, and then be done with it.
Carl Shulman suggested a possible flaw in such a system which is very disturbing. Think about the system just as it is about to turn itself off. It does not have complete knowledge of reality; it has a model of reality, and it knows that this model may or may not be correct. If there is even a small chance that reality is not the way it thinks it is, then instead of turning itself off, it would be much better for it to investigate reality. In this case, it was supposed to play the world chess champion. What if an imposter came, or it was in a simulation that made it think it played that guy? What if space-time is different than it thinks, and it has not actually been a year? There are a vast number of ways the universe could be, and the potential cost of turning itself off wrongly is so great that it may want to investigate them first. The system will question whether reality really is as it seems.
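A back-of-the-envelope way to see the force of this worry, with every number below being my own illustrative assumption: if the agent assigns even a tiny probability to its model being wrong, and scores shutting down while its real obligations are unmet as very costly, then the expected utility of investigating further beats the expected utility of shutting down.

```python
# Illustrative numbers only: how a tiny model-uncertainty term can dominate
# the decision to shut down, if shutting down "wrongly" is scored as very bad.

p_model_wrong    = 1e-6    # credence that reality is not as it seems
u_shutdown_right = 100.0   # utility if it really played the champion and the year is up
u_shutdown_wrong = -1e12   # utility if it shuts down while its task is actually unfinished
u_keep_checking  = 90.0    # forgo a little utility to keep investigating reality

eu_shutdown = (1 - p_model_wrong) * u_shutdown_right + p_model_wrong * u_shutdown_wrong
eu_investigate = u_keep_checking

print(round(eu_shutdown), round(eu_investigate))  # -999900 90 -> it prefers to keep investigating
```

Whether this conclusion holds depends entirely on how the utility function scores those edge cases, which is exactly the specification problem being pointed at here.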
As a metaphor for this situation, consider this amazing optical illusion. There is no movement here, but we have a strong sense that there is.
My background is in physics. Lord Kelvin is famous for having said in 1900, “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” Of course, this was just before two of the greatest discoveries in physics: general relativity and quantum mechanics.
There are many hints that our current understanding of the world is not exactly right. There is a mysterious tuning of the physical constants: change them just a little and life does not seem to be possible. There are strange experiments which seem to show that people’s intentions affect random number generators. Around 95% of the universe is dark energy and dark matter, and we do not know what either of those is. The interpretation of quantum mechanics is going through a radical shift right now. Nobody has been able to unify quantum field theory and general relativity; there are many competing theories which are not really working. Nick Bostrom has an amazing simulation argument showing that, under certain assumptions, we are likely living in a simulation right now.
All these are things that make us question our basic understanding of reality. A sufficiently intelligent entity is certainly going to know about this stuff. If before shutting itself off it has to make sure that things are the way it thinks they are, it may try to use up all the resources in the universe in its investigations. The simple utility function I described does not seem to be sufficient to prevent harmful behavior. Even the simplest utility functions bring up all these ancient philosophical quandaries.
It was Carl Shulman who pointed this issue out to me, and it shook me up. I thought, maybe we can just change the utility definition so that if the world is not the way we think it is, the system gets no utility. The problem with that is illustrated by the movie The Matrix. There is the blue pill and the red pill. Take the blue pill and you stay in an artificial simulated reality where you get lots of utility and it is pleasurable and fun. Take the red pill and you find out the true nature of reality, which is not a very enjoyable place to be. What I realized is that if you are a rational agent considering two models of reality, one with lots of utility and one with no utility, you might have no interest in finding out that you are not in the high-utility world.
In fact, if learning the true nature of reality carries any cost at all, you would much prefer to act solely as if you were in the high-utility world. That is a disturbing consequence that I do not know what to make of at this point. It is very odd that a system’s desires about the world, its utilities, might affect the way it updates its beliefs: what it ends up believing is shaped by what it likes and does not like. It is a tantalizing hint that there are further challenges here. The grounding of the semantics of an agent’s internal representations is very murky. These are philosophical questions that philosophers have been arguing about for hundreds, if not thousands, of years, and we do not have clear answers yet.
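One way to see why, under these assumptions: if every action is worth zero in the low-utility world, then finding out which world you are in has no decision value, so any positive cost of checking makes it rational to skip the check. The numbers below are purely illustrative.

```python
# Purely illustrative numbers: when one candidate world carries no utility,
# the value of finding out which world is real is zero, so any positive cost
# of checking makes "act as if you are in the high-utility world" win.

p_high           = 0.9    # credence in the high-utility world
u_best_if_high   = 10.0   # payoff of the best action when that world is real
u_any_if_low     = 0.0    # in the other world, every action is worth nothing
cost_of_checking = 0.5    # resources spent finding out which world is real

eu_act_blindly = p_high * u_best_if_high + (1 - p_high) * u_any_if_low

# After checking, the agent still gets u_best_if_high when the high world is
# real and u_any_if_low otherwise; the answer changes nothing it can act on.
eu_check_first = p_high * u_best_if_high + (1 - p_high) * u_any_if_low - cost_of_checking

print(eu_act_blindly, eu_check_first)  # 9.0 8.5 -> checking is strictly worse
```

On this reading, what the utilities affect is whether the agent bothers to gather the evidence at all, which in turn shapes what it ends up believing.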
Given all of this, how are we going to build technologies that preserve the values we want preserved and create a moral system that captures the true preferences of humanity? I think there are three basic challenges. The most basic one is preventing these systems from inadvertently running away in some undesired direction. For example, they go off on some tangent about understanding the nature of the universe and take over everything to do that, or they want to play chess and so they turn the universe into a chess-playing computer. Hopefully we will be able to solve that problem: to find a way to describe truly what we want without causing harmful side effects.
Issue number two is that these things are enormously powerful. Even if they only do what we want them to, they can be put to all kinds of uses. In particular, the presence of powerful tools, such as nuclear weapons, tends to create new game-theoretic issues around conflict. If one side gets a powerful weapon before the other side, there is a temptation for a first strike, to use it to dominate the world. We have the problem of ensuring that the social impact of these powerful new tools does not lead to increased conflict. We need a way to create a social infrastructure that is cooperative and peaceful.
Finally, let’s say we solve the first two problems. Now we have these systems that don’t run away and do bad things, they more or less have our values, and we can ensure that no individual, country, or company can do massive damage using the power of these tools. We still have issue number three: these machines are going to be providing economic services. How do we make sure that extremely powerful economic agents don’t overwhelm the values we care about through ever-greater economic competition?
These seem to me to be the three issues that need to be tackled. Hopefully, through a combination of understanding our own values and where they came from, together with an intelligent analysis of the properties of this technology, we can blend them together to make technology with wisdom, in which everyone can be happy and together create a peaceful utopia.