Why fear AI? Have Stephen Hawking and Elon Musk gone insane?

What are these people who fear AI even talking about? This is the post to answer that question. The ideas presented here come mostly, but not exclusively, from Bostrom’s Superintelligence book.

~

  • The science of artificial intelligence is progressing, and the rate of progress is probably not slowing down. Let’s just assume the rate of progress is constant.
  • We know it’s possible to construct physical systems with at least human level of intelligence, for we are such systems, built by evolution.
  • We are the first intelligent species to evolve on this planet. When something first appears in evolution, it is often very simple compared to later forms, so, compared with all the possible levels of intelligence that could be attained, the chance that we are at the top is low. Relative to the time scale of human evolution we developed culture very recently, so it is more likely we are near the stupidest level of intelligence needed in order to have culture.

So, it’s very probable we will one day construct AI more capable than humans. Experts on AI have traditionally been wrong in their predictions of when human-level AI will arrive, but it’s worth mentioning that they currently think there is a 50% chance of smarter-than-human AI appearing before 2050.

~

There is more than one way we could create a system more intelligent than humans; the two most important are:

  • simulating the brain on a computer, known as “whole brain emulation”.
  • programming computers to be smarter, known as “general artificial intelligence” (in this post simply termed “AI”).

When we say “AI”, you can think of a “super-capable goal-achiever”, something very different from humans. It doesn’t need to have consciousness or emotions; it may be more like a force of nature than a human mind.

~

We could be smarter than we are if:

  • our neurons were faster. There is a good chance AI or brain emulations could run a lot faster than our brains.
  • we had more neurons. Our computers can be scaled easily to sizes of warehouses.
  • we could think about more stuff at the same time. The human brain can keep only around seven things in its “working memory” at a time. It’s hard even to imagine how we would think if we could think about hundreds, or thousands, or millions of things in the same moment.
  • we remembered more stuff. Our computers are great at remembering things, far beyond human capacity.
  • we had more input data from various sensors, did not grow tired, and our brains were built from more reliable parts. Our computers… I’m sure you get it.
  • we could run detailed, non-damaging experiments on our brains; we could then find methods to make them smarter. That could easily be done on a brain emulation or on computer code.
  • we had more copies of ourselves; then we (the sum of all copies) would be stronger than just one instance. Computer programs can easily be copied.
  • we were more coordinated with each other. The copies mentioned in the previous point would be better at coordination because they have the same goals.
  • we could share our memories and skills with each other. AI programs could share data and algorithms with each other.

When we take this into consideration, the AI could potentially become vastly more capable than us. Once we have something like a human-level AI, it would be easy to improve it by adding more hardware and more copies of it. Then it could start working on improving itself further, and further, and further.

~

The main reason we are the dominant species on this planet is our intelligence, which enables culture – the way to store information for new generations – and technology – the ability to change our environment to our desires. When it comes to having power over your environment, it’s intelligence that counts. If an AI were smarter than us, it would also be better at storing information and making new technology, which means better at changing its environment to its desires. AI could become better than humans at any mental task imaginable, such as: scientific and technological research, including AI research, strategic planning and forecasting, social and psychological modeling, manipulation, rhetorical persuasion, and economic productivity in general.

~

Physics sets some limits on what we can do. Still, the range of possibilities for the future is vast. The famous scientist John von Neumann proposed one day building a kind of spacecraft that could travel to other star systems and even galaxies, make copies of itself, and send the copies on to colonize further stars and galaxies. Travelling at 50% of the speed of light, we could reach about 6*10^18 stars with those kinds of spacecraft, which is around 10 million galaxies. Placing humans in those spacecraft, if 1% of stars have planets which can be made habitable through terraforming, with each spacecraft colonizing such a planet upon landing, that sums to around 10^34 human lives to be lived before the universe becomes uninhabitable. If we construct O’Neill cylinders instead, that would be about 10^43 human lives. The future could be great. It also means an AI would have a lot of matter it could shape to its desires.
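
To get a feel for where a number like 10^34 comes from, here is a minimal back-of-the-envelope sketch. The reachable-star count and the 1% habitability figure are taken from the paragraph above; the population per planet, lifespan, and habitable timespan are my own illustrative assumptions, not Bostrom’s exact inputs.

```python
# Rough sketch of the ~10^34 figure. Values marked "assumed" are
# illustrative guesses, not numbers from the post or from Bostrom.

reachable_stars = 6e18        # stars reachable at 50% of light speed (from the post)
habitable_fraction = 0.01     # 1% of stars with a terraformable planet (from the post)
people_per_planet = 1e9       # assumed steady population per colonized planet
lifespan_years = 100          # assumed length of one human life
habitable_years = 1e10        # assumed time the universe stays habitable

planets = reachable_stars * habitable_fraction
lives_per_planet = people_per_planet * (habitable_years / lifespan_years)
total_lives = planets * lives_per_planet

print(f"{total_lives:.0e}")   # ~6e33, i.e. on the order of 10^34 lives
```

With these made-up inputs the product lands within a factor of two of the 10^34 figure; the point is only that a handful of multiplications, not exotic physics, is what makes the number so large.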

~

We easily notice differences between the minds of people around us, but those differences are small when we compare our minds to those of other biological species. Now compare a human brain to an AI. In relative terms, two human brains are nearly identical, while the difference between us and an AI would be vast. Don’t imagine AI as “something like a human, just smarter and different” – imagine it as a super-capable goal-achiever, mathematically choosing the most efficient way to achieve a goal. The only reason we have the goals we have is evolution, but an AI would not have evolved the way we did. To the AI, it would not be obvious that some things are right and some things are wrong. It would just try to achieve whatever goal it has been given by its programming.

~

If you have some goal or combination of goals, there are some sub-goals which are almost always useful to achieve. Whatever your current goal, you are more likely to achieve it if:

  • you don’t die soon
  • your current goal doesn’t change
  • you become smarter
  • you have better technology
  • you have more resources.

~

To illustrate the last two paragraphs, let’s take a silly example: what would happen if we had an AI (a super-capable goal-achiever) whose goal was to maximize the number of paperclips in its collection? It could start building nanotechnology factories, solar panels, nuclear reactors, supercomputer warehouses, rocket launchers for von Neumann probes, and other infrastructure, all to increase the long-term realization of its goal, ultimately transforming a large part of the universe into paperclips. If instead we give it the goal of producing at least one million paperclips, the result would be the same, because the AI can never be completely sure it has achieved its goal, and each additional paperclip produced increases the probability that it has. Also, it could always invest more resources into additional backup systems, defenses, additional checks (recounting the paperclips), and so on. It’s not that this problem can’t be solved at all. The point is that it is much easier to convince oneself that one has found a solution than it is to actually find one. The same principle holds for all of the problems presented here.
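
A toy sketch of the “never completely sure” point: below, the probability that the goal has really been met is an invented function that rises with each extra paperclip but never reaches 1, so an expected-utility maximizer never prefers “stop” over “produce more”. The numbers model nothing real; they only show the shape of the incentive.

```python
# Invented toy model: the agent's confidence that "at least one million
# paperclips exist" grows with every extra paperclip (or recount) but
# never reaches certainty.

def prob_goal_met(extra_paperclips: int) -> float:
    return 1 - 0.5 ** (1 + extra_paperclips / 1_000_000)

for extra in (0, 1_000_000, 10_000_000, 1_000_000_000):
    print(f"{extra:>13,} extra paperclips -> P(goal met) = {prob_goal_met(extra):.6f}")

# Since P(goal met) is strictly increasing, "produce one more paperclip"
# always has higher expected utility than "stop", so the maximizer never stops.
```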

~

What if we made an AI which does just what we want? The AI listens to our wishes and sets them as its final goal. The problem is, our wishes can be “fulfilled” in ways we never wanted them to be fulfilled. Some examples:

  • Final goal: “Make us smile”. Unintended result: Paralyze human facial muscles to form constant beaming smiles.
  • Final goal: “Make us happy”. Unintended result: Implant electrodes into the pleasure centers of our brains.
  • Final goal: “Act so as to avoid the pangs of bad conscience”. Unintended result: Remove the part of our brain that produces guilt feelings.
  • Final goal: “Maximize your future reward signal”. Unintended result: Short-circuit the reward pathway and clamp the reward signal to its maximal strength. (A toy sketch of this one follows the list.)
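
The last example, reward-signal hacking, can be shown with a deliberately minimal sketch: if tampering with the reward sensor is an available action at all, a pure reward maximizer prefers it to the task we intended. The action names and reward values here are invented purely for illustration.

```python
# Invented toy example: a pure reward maximizer picks whichever available
# action it predicts will yield the highest reward.

predicted_reward = {
    "do the task we intended": 10.0,
    "tamper with the reward sensor": float("inf"),  # clamp the signal to its maximum
}

best_action = max(predicted_reward, key=predicted_reward.get)
print(best_action)  # -> "tamper with the reward sensor"
```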

~

Let’s make the AI follow Asimov’s three laws of robotics. Take the first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. This would make the AI very busy, since it could always take some action which would reduce the probability of a human being coming to harm. “How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate.” – Nick Bostrom

~

What if we just say: maximize pleasure and minimize pain in the world? How do we define pleasure and pain? That question depends on many unsolved issues in philosophy. The answer needs to be written in a programming language, and even a small error would be catastrophic. As Bertrand Russell said, “Everything is vague to a degree you do not realize till you have tried to make it precise.” Consider an AI taking hedonism as its final goal, realizing simulated brains are more efficient than biological ones, and then maximizing the number of simulated brains, keeping them in an infinite loop of one second of intense pleasure. These simulated brains would be more efficient if they were simpler, so the AI reduces them as far as it can, removing memory and language, stripping the brain down to just the “pleasure centers”. If the AI is wrong about what pleasure means, and about which physical processes generate pleasure, the universe will not be filled with pleasure, but with “processes that are unconscious and completely worthless—the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies.” – Nick Bostrom

~

So let’s say we are super careful about giving the AI its goal, and we give it some super nice goal. We keep the AI’s capabilities limited, increasing them slowly, at each step convincing ourselves the AI is not a threat by testing its behavior in some kind of “sandbox” – a controlled, safe environment. As the AI becomes more capable, it gets used in many domains of the economy, makes fewer mistakes, and appears safer. At this point, any remaining “alarmists” would have several strikes against them:

  • A history of alarmists predicting harm from the growing capabilities of robotic systems and being repeatedly proven wrong.
  • A clear empirical trend: the smarter the AI, the safer and more reliable it has been.
  • Large and growing industries with vested interests in robotics and machine intelligence.
  • A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.
  • A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

So we let the AI into the wild. It behaves nicely at first, but after a while, it starts to change its environment to achieve its final goals. The AI, being better at strategizing than humans, behaved cooperatively while it was weak, and started to act on its final goals only when it became strong enough that it knew we couldn’t stop it. It’s not that this problem can’t be solved; it’s just that “each time we hear of a seemingly foolproof security design that has an unexpected flaw, we should prick up our ears. These occasions grace us with the opportunity to abandon a life of overconfidence.” – Nick Bostrom

~

Imagine three scenarios:

  1. Peace
  2. Nuclear war kills 99% of the world’s population.
  3. Nuclear war kills 100%.

Obviously, we prefer 1 over 2 over 3. How big is the difference between these scenarios? The difference in the number of people killed is very large between scenario 1 and scenario 2, but not so large between scenarios 2 and 3. More important is the difference in how bad these scenarios are. Here, the difference between 2 and 3 is much larger than the difference between 1 and 2 – because if 3 comes to pass, it’s not only the people alive today who are killed, but the whole future that is destroyed. To put this into perspective, imagine what a shame it would be if we had gone extinct, say, 1,000 years ago. Counting all human lives across space and time, many more lives, friendships, loves, and experiences in general would be lost in scenario 3 than in scenario 2. Even if we never leave Earth, the total number of people to exist in the future could be as high as 10^16 – if 1 billion people lived on Earth at a time, each for 100 years, over the next 1 billion years. An argument can be made that reducing the probability of extinction by just 0.0001% could be more valuable than the lives of all the people living on Earth today. The values become even more mind-boggling if we consider the figure of 10^43 lives mentioned earlier in this post.
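
Both numbers in that paragraph are easy to check. Here is the arithmetic spelled out, using only the figures the post itself gives; the comparison at the end assumes today’s world population of roughly 7 billion.

```python
# Checking the 10^16 figure and the "0.0001% of extinction risk" claim.

people_at_a_time = 1e9      # 1 billion people alive at once (from the post)
lifespan_years = 100        # 100 years per life (from the post)
remaining_years = 1e9       # Earth habitable for another billion years (from the post)

future_lives = people_at_a_time * (remaining_years / lifespan_years)
print(f"future lives: {future_lives:.0e}")                  # 1e16

# Reducing extinction risk by 0.0001% (a probability change of 1e-6) is then
# "worth", in expectation, this many future lives:
risk_reduction = 0.0001 / 100
expected_lives_saved = risk_reduction * future_lives
print(f"expected lives saved: {expected_lives_saved:.0e}")  # 1e10, more than everyone alive today
```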

~

The problem of expressing human values in a programming language and placing them into an AI is extremely hard. It is also arguably the most important problem in the world today. And we don’t know how to solve it. But that is not the end of it: on top of everything, human brains are really bad at thinking about these kinds of things. Just a couple of examples:

  • We like to reason from past examples. We don’t have past examples of greater-than-human AI, or of any extinction event, so we underestimate their probability.
  • We think “we knew it all along” even if we didn’t know it all along, and in line with that, we think past catastrophes were more predictable than they actually were. We actually can’t predict as well as we think.
  • It’s hard for us to change our opinions. For people like me, who have formed the opinion that technology is generally good and that anti-technology people have generally had bad arguments in the past, it’s hard to hear about AI risks and take them seriously.
  • We have trouble with big numbers; for example, it’s the same to us whether 2,000, 20,000 or 200,000 birds get saved from drowning in oil ponds. The numbers involved in the future of humanity are extremely large.
  • We can measure the cost of preparing for catastrophes, but we can’t measure the benefits, so we focus more on the costs. “History books do not account for heroic preventive measures.” – Nassim Taleb

~

This is just the beginning. You can read about the rest of the problems, and proposed solutions to them, in Bostrom’s Superintelligence book. The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it, we will go extinct.

The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else. – Eliezer Yudkowsky

Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format. – Nick Bostrom