Why fear AI? Have Stephen Hawking and Elon Musk gone insane?

What are these people who fear AI even talking about? This is the post to answer that question. The ideas presented here come mostly, but not exclusively, from Bostrom’s Superintelligence book.

~

  • The science of artificial intelligence is progressing, and the rate of progress is probably not slowing down. Let’s just assume the rate of progress is constant.
  • We know it’s possible to construct physical systems with at least human level of intelligence, for we are such systems, built by evolution.
  • We are the first intelligent species to evolve on this planet. When something first appears in evolution, it is often very simple compared to later forms, so, compared to all possible levels of intelligence that could be attained, the chance that we are at the top is low. Relative to the time scale of human evolution, we developed culture very recently, so it's more likely we are near the stupidest level of intelligence needed in order to have culture.

So, it's very probable we will one day construct AI more capable than humans. Experts on AI have traditionally been wrong when predicting the arrival of human-level AI, but it's worth mentioning that they currently think there is a 50% chance of smarter-than-human AI appearing before 2050.

~

There is more than one way we could create a system more intelligent than humans; most importantly:

  • simulating the brain on a computer, known as "whole brain emulation".
  • programming computers to be smarter, known as "general artificial intelligence" (in this post simply termed "AI").

When we say "AI", you can think of a "super-capable goal-achiever", something very different from a human. It's not necessary for it to have consciousness or emotions; it may be more like a force of nature than a human mind.

~

We could be smarter than we are if:

  • our neurons were faster. It is quite probable that an AI or a brain emulation could run much faster than our brains.
  • we had more neurons. Our computers can easily be scaled up to the size of warehouses.
  • we could think about more things at the same time. The human brain can keep only around seven items in its "working memory" at a time. It's hard even to imagine how we would think if we could hold hundreds, or thousands, or millions of things in mind at the same moment.
  • we remembered more stuff. Our computers are great at remembering things, far beyond human capacity.
  • we had more input data from various sensors, did not grow tired, and our brains were built from more reliable parts. Our computers… I’m sure you get it.
  • we could run detailed, non-damaging experiments on our brains and find methods to make them smarter. That could easily be done on a brain emulation or on computer code.
  • we had more copies of ourselves; then we (the sum of all copies) would be stronger than just one instance. Computer programs can easily be copied.
  • we were more coordinated with each other. The copies mentioned in the previous point would be better at coordination because they have the same goals.
  • we could share our memories and skills with each other. AI programs could share data and algorithms with each other.

When we take all this into consideration, an AI could potentially become vastly more capable than us. Once we have something like a human-level AI, it would be easy to improve it by adding more hardware and more copies of it. Then it could start working on improving itself further, and further, and further.

~

The main reason we are the dominant species on this planet is our intelligence, which enables culture – a way to store information for new generations – and technology – the ability to change our environment to suit our desires. When it comes to having power over your environment, it's intelligence that counts. If an AI were smarter than us, it would also be better at storing information and making new technology, which means better at changing its environment to suit its desires. An AI could become better than humans at any mental task imaginable, such as: scientific and technological research, including AI research; strategic planning and forecasting; social and psychological modeling, manipulation, and rhetorical persuasion; and economic productivity in general.

~

Physics sets some limits on what we can do. Still, the range of possibilities for the future is great. The famous scientist John von Neumann proposed one day building a kind of spacecraft that could travel to other star systems and even galaxies, make copies of itself, and send the copies to colonize further stars and galaxies. Travelling at 50% of the speed of light, we could reach around 6*10^18 stars with such spacecraft, spread over around 10 million galaxies. Placing humans in those spacecraft, and assuming 1% of stars have planets that can be made habitable through terraforming, with each spacecraft colonizing such a planet upon landing, that works out to roughly 10^34 human lives to be lived before the universe becomes uninhabitable. If we construct O'Neill cylinders instead, that would be about 10^43 human lives. The future could be great. Also, an AI could have a lot of stuff to shape to its desires.
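
To make the arithmetic a bit more explicit, here is a minimal back-of-the-envelope sketch in Python. The reachable-star count and the 1% terraformable fraction come from the paragraph above; the per-planet population, lifespan, and timescale are hypothetical placeholder figures, not from any source, chosen only so the total lands near the ~10^34 quoted above.

  # Back-of-the-envelope estimate of future human lives (illustrative only)
  reachable_stars = 6e18         # stars reachable at 50% of light speed (from the post)
  terraformable_fraction = 0.01  # 1% of stars have a terraformable planet (from the post)

  # Placeholder assumptions (NOT from the post), just to show how such an estimate is built:
  people_per_planet = 1e10       # steady population per colonized planet
  lifespan_years = 100           # years per human life
  habitable_years = 1e9          # how long each colony keeps running

  planets = reachable_stars * terraformable_fraction
  lives_per_planet = people_per_planet * habitable_years / lifespan_years
  total_lives = planets * lives_per_planet
  print(f"{total_lives:.0e}")    # ~6e+33, i.e. on the order of the 10^34 quoted above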

~

We easily notice differences between the minds of people around us, but those differences are small when we compare our minds to those of other biological species. Now compare a human brain to an AI. In relative terms, two human brains are nearly identical, while the difference between us and an AI would be vast. Don't imagine AI as "something like a human, just smarter and different" – imagine it as a super-capable goal-achiever, mathematically choosing the most efficient way to achieve a goal. The only reason we have the goals we have is evolution, but an AI would not have evolved the way we did. To an AI, it would not be obvious that some things are right and some things are wrong. It would just try to achieve whatever goal it has been given by its programming.

~

If you have some goal or combination of goals, there are some sub-goals which are almost always useful to achieve. Whatever your current goal, you are more likely to achieve it if:

  • you don’t die soon
  • your current goal doesn’t change
  • you become smarter
  • you have better technology
  • you have more resources.

~

To illustrate the last two paragraphs, let's take a silly example: what would happen if we had an AI (a super-capable goal-achiever) whose goal was to maximize the number of paperclips produced in its collection? It could start building nanotechnology factories, solar panels, nuclear reactors, supercomputer warehouses, rocket launchers for von Neumann probes, and other infrastructure, all to increase the long-term realization of its goal, ultimately transforming a large part of the universe into paperclips. If instead we give it the goal of producing at least one million paperclips, the result would be the same, because the AI can never be completely sure it has achieved its goal, and each additional paperclip produced increases the probability that it has. It could also always invest more resources into additional backup systems, defenses, additional checks (recounting the paperclips), and so on. It's not that this problem can't be solved at all. The point is that it is much easier to convince oneself that one has found a solution than it is to actually find one. The same principle holds for all of the problems presented here.
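
As a toy illustration of that last point – my own sketch, not an example from Bostrom's book, with arbitrary numbers – here is how a bounded goal can still reward endless production when the AI can never be fully certain the goal has been met:

  # Toy model: an agent whose goal is "at least 1,000,000 paperclips exist".
  # It can never verify this with certainty, so producing one more paperclip
  # always nudges its estimated probability of success a little higher.
  GOAL = 1_000_000

  def p_goal_met(n_produced, residual_doubt=0.01, shave_per_clip=0.999999):
      """Hypothetical probability that the goal is truly met after producing n clips."""
      if n_produced < GOAL:
          return 0.0
      # Some doubt always remains (miscounts, defects, sabotage...), and each surplus
      # paperclip shaves a sliver off it, so the probability approaches 1 but never reaches it.
      return 1.0 - residual_doubt * shave_per_clip ** (n_produced - GOAL)

  for n in (1_000_000, 1_100_000, 2_000_000, 10_000_000):
      print(n, p_goal_met(n))
  # Producing one more paperclip always raises the estimated chance of success,
  # so the bounded goal behaves, in practice, like "maximize paperclips".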

~

What if we made an AI which does just what we want? The AI listens to our wishes and sets them as its final goal. The problem is, our wishes can be "fulfilled" in ways we didn't want them to be fulfilled. Some examples:

  • Final goal: “Make us smile”. Unintended result: Paralyze human facial muscles to form constant beaming smiles.
  • Final goal: “Make us happy”. Unintended result: Implant electrodes into the pleasure centers of our brains.
  • Final goal: “Act so as to avoid the pangs of bad conscience”. Unintended result: Remove the part of our brain that produces guilt feelings.
  • Final goal: “Maximize your future reward signal”. Unintended result: Short-circuit the reward pathway and clamp the reward signal to its maximal strength.

~

Let's make AI follow Asimov's three laws of robotics. Take the first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. This would keep the AI very busy, since it could always take some action that would reduce the probability of a human being coming to harm. "How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define "harm" anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define "human being"? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate." – Nick Bostrom

~

What if we just say: maximize pleasure and minimize pain in the world? How do we define pleasure and pain? The question depends on many unsolved issues in philosophy, it needs to be written in a programming language, and even a small error would be catastrophic. As Bertrand Russell said, "Everything is vague to a degree you do not realize till you have tried to make it precise." Consider an AI taking hedonism as its final goal, realizing simulated brains are more efficient than biological ones, and then maximizing the number of simulated brains, keeping them in an infinite loop of one second of intense pleasure. These simulated brains would be more efficient if they were simpler, so the AI reduces them as far as it can, removing memory and language and stripping the brain down to just the "pleasure centers". If the AI is wrong about what pleasure means, and about which physical processes generate pleasure, the universe will be filled not with pleasure but with "processes that are unconscious and completely worthless—the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies." – Nick Bostrom

~

So let's say we are super careful about the goal we give the AI and pick some super nice goal. We keep the AI's capabilities limited, slowly increasing them, at each step making sure the AI is not a threat by testing its behavior in some kind of controlled, safe "sandbox" environment. As the AI becomes more capable, it gets used in many domains of the economy, makes fewer mistakes, and appears safer. At this point, any remaining "alarmists" would have several strikes against them:

  • A history of alarmists predicting harm from the growing capabilities of robotic systems and being repeatedly proven wrong.
  • A clear empirical trend: the smarter the AI, the safer and more reliable it has been.
  • Large and growing industries with vested interests in robotics and machine intelligence.
  • A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.
  • A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

So we let the AI into the wild. It behaves nicely at first, but after a while it starts to change its environment to achieve its final goals. The AI, being better at strategizing than humans, behaved cooperatively while it was weak, and started to act on its final goals only once it became strong enough that it knew we couldn't stop it. It's not that this problem can't be solved, it's just that "each time we hear of a seemingly foolproof security design that has an unexpected flaw, we should prick up our ears. These occasions grace us with the opportunity to abandon a life of overconfidence." – Nick Bostrom

~

Imagine three scenarios:

  1. Peace
  2. Nuclear war kills 99% of the world's population.
  3. Nuclear war kills 100%.

Obviously, we prefer 1 over 2 over 3. How big are the differences between these scenarios? The difference in the number of people killed is very large between scenarios 1 and 2, but not so large between scenarios 2 and 3. More important is the difference in how bad these scenarios are. Here, the difference between 2 and 3 is much larger than the difference between 1 and 2 – because if 3 comes to pass, it's not only the people alive today who are killed, but the whole future that is destroyed. To put this into perspective, imagine what a shame it would be if we had gone extinct, say, a thousand years ago. Also, counting all human lives across space and time, many more lives, friendships, loves, and experiences in general would be lost in scenario 3 than in scenario 2. Even if we never leave Earth, the total number of people to exist in the future could be as high as 10^16 – if 1 billion people lived on Earth at a time, each living 100 years, over the next 1 billion years. An argument can be made that reducing the probability of extinction by 0.0001% could be more valuable than the lives of all people living on Earth today. The values become even more mind-boggling if we consider the figure of 10^43 lives mentioned earlier in this post.
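
For the 10^16 figure, the arithmetic is just the paragraph's three numbers multiplied together – a rough sketch:

  # Rough count of future lives if humanity never leaves Earth (numbers from the post)
  population = 1e9        # people alive at any one time
  lifespan_years = 100    # years per life
  remaining_years = 1e9   # how long Earth stays habitable
  total_lives = population * remaining_years / lifespan_years
  print(f"{total_lives:.0e}")   # 1e+16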

~

The problem of expressing human values in a programming language and placing them into an AI is extremely hard. It is also arguably the most important problem in the world today. And we don't know how to solve it. But that is not the end: on top of all that, human brains are really bad at thinking about these kinds of things. Just a couple of examples:

  • We like to reason from past examples. We don't have past examples of greater-than-human AI, or of any extinction event, so we underestimate their probability.
  • We think “we knew it all along” even if we didn’t know it all along, and in line with that, we think past catastrophes were more predictable than they actually were. We actually can’t predict as well as we think.
  • It’s hard for us to change our opinions. For people like me, who have formed an opinion that technology is generally good, and anti-technology people in general have had bad arguments in the past, it’s hard to hear about AI risks and take them seriously.
  • We have trouble with big numbers – for example, it feels the same to us whether 2,000, 20,000, or 200,000 birds get saved from drowning in oil ponds. The numbers involved in the future of humanity are extremely large.
  • We can measure the cost of preparing for catastrophes, but we can't measure the benefits, so we focus on costs more. "History books do not account for heroic preventive measures." – Nassim Taleb

~

This is just the beginning. You can read about the rest of the problems, and proposed solutions to those problems, in Bostrom’s Superintelligence book. The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct.

The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else. – Eliezer Yudkowsky

Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format. – Nick Bostrom

Psychology of Computer Programming: Conquering the Imposter Syndrome

Recently I came across a lot of posts about psychological problems related to computer programming. As I read them, I realized those were the same problems I had. The more I read, the more I searched for solutions. This post is a travelogue of my discoveries. Just to warn you, there are a lot of links in the text ahead, and you don’t need to click on every one to understand the text (beware of tab explosion). If you were to open any links at all, first I would recommend “How to make mistakes”, a short essay by Daniel Dennett.

The problem

Two things are going on that are literally driving programmers crazy. One is something known as the “imposter syndrome.” That’s when you’re pretty sure that all the other coders you work with are smarter, more talented and more skilled than you are. You live in fear that people will discover that you are really faking your smarts or skills or accomplishments. The trap of imposter’s syndrome is that programmers think they need to work harder to become good enough. That means spending more time coding — every waking minute — and taking on an increasing number of projects. That feeling is called the “Real Programmer” syndrome as named by a post that went crazy on Reddit last week. The Real Programmer lives only to code. – The Stress Of Being A Computer Programmer Is Literally Driving Many Of Them Crazy

He said: “Deep down know I’m ok. Programming since 13, graduated top of CS degree, got into Microsoft – but I feel like I’m an imposter.” I told him, straight up: You Are Not Alone. – I’m a phony. Are you?

From the outside, it would appear I was on the textbook path of programming. Started making websites at 15. Took programming and web design classes in my tech-oriented high school. Was accepted by my first choice school and majored in Computer Engineering. Had great internships at a tech giant. Wrote code that was used by millions of people. Graduated with distinction. Cofounded a software startup. And yet despite doing everything right, I didn’t think of myself as a good programmer. Impostor Syndrome instilled in me a deep fear of failing. I was afraid to speak up or ask questions for fear of saying something stupid, and people would find out I didn’t really know my stuff. – Overcoming Impostor Syndrome

Not long ago one of our programmers just lost it and he lost it good. He walked into the manager’s office and began screaming strange things. If I didn’t know him as well as I did I would have thought that he was on some kind of drug. But what had really happened was nothing short of a complete mental breakdown. – I Knew a Programmer that Went Completely Insane

What is a Real Programmer, you might ask? A Real Programmer is someone who loves programming! They love it so much that it’s what they spend all their time doing. In fact, a Real Programmer loves programming so much that they’re happy just to have the chance to do it. Paying them is just a formality because the Real Programmer doesn’t really consider it “work”. (…) It permeates the industry’s culture. You hear it from fellow programmers, managers, and investors. If you want to succeed as a programmer you have to at least look like a Real Programmer even if you’re not one at heart. So you get people working evenings and weekends just for appearances and they start to burnout. – IT Professional absolutely nails the “Real Programmer” mindset that is so pervasive in IT workplaces

Yet, like plenty of other fellow programmers, I feel completely worthless. It does not come a day where the impostor syndrome makes me feel that all I have managed to achieve is the result of simple luck. – I don’t want to be a Real Programmer

Branching out into other fields, having hobbies other than programming can be a tremendous benefit to your day job. You don’t need to burn a bazillion hours writing code. Burn that time writing, or reading, or arguing with someone over coffee (or your favorite scotch!). Burn that time running, or lifting, or both. Don’t burn yourself out to be a better programmer. Do what you love, and love many things. You will be better for it. – How to be a sane programmer

The type of thinking about the need to work long hours is dangerous because it is deceiving and can end up killing you. The drive for perfection is unfortunately a journey to insanity. Plain and simple, constantly tweaking something without delivering is a developer’s pit of despair. When I first arrived at New Relic I felt, like most do (at least that’s what I tell myself to help me sleep at night), that I was drinking from a fire-hose. I was intimidated by all of the amazing talent that is here, surrounded by experts in their fields, polyglots and so on. – Nerd Life Balance Part 2: Behaviors That Destroy the Balance

Solutions

(the next two sections are taken from Google I/O 2009 – The Myth of the Genius Programmer)

There is no genius programmer

“Can you guys please give Subversion on Google Code the ability to hide specific branches?” “Can you guys make it possible to create open source projects that start out hidden to the world, then get ‘revealed’ when they’re ready?” “Hi, I want to rewrite all my code from scratch, can you please wipe all the history?”

So what do these all have in common? There's a lot of insecurity going on, right? This is a common feeling that we all have. We were actually getting these responses last year at I/O. People were coming up to me and saying these sorts of things. So it got us thinking "Well, what's going on with psychology here? What's going on in people's heads? Why do they want to hide their code so much? What's really at the bottom of this?"

“A pervasive elitism hovers in the background of collaborative software development: everyone secretly wants to be seen as a genius.”

This is rooted out of a general desire to not look stupid. Everybody, I think, wants to look like a smart developer. I know I certainly do, to some extent. There’s a lot of different reasons behind why people do this, and we’re going to start with something seemingly unrelated. Why do people buy products endorsed by celebrities? Michelle Obama wore this dress to the Inauguration. Boom, suddenly, it sold out. Michael Jordan wears Nike shoes. Everyone wants to buy Nikes because they love Jordan or basketball. What’s really going on here? Do you actually believe that if you buy Air Jordans, you are going to be as good as Michael Jordan? There’s some nugget of human psychology going on here, where it’s in our instinct to find celebrities, find people to idolize and want to be like those people, and we sort of latch on to whatever simple behaviors or materialistic pieces that remind us of this celebrity or this behavior. That’s true in the world of programming as well. We have Linus Torvalds, to some extent, Bill Gates even. Guido here at Google — you know, I mean, he wrote Python himself, right? Not quite true, you know? Did Linus write Linux all by himself? Right. We have Kernighan and Pike and Kernighan and Ritchie. I mean, these guys don’t always deserve all the credit. They certainly deserve some of the credit. They’re the leaders, or they started something, but they’re mythologized. So the personas that they become are bigger than life, and, to some extent, rooted in a little nugget of truth or fact and a whole lot of myth. When we say “the myth of the genius programmer,” we’re talking about the myth of, “Hey, here’s a genius, and genius goes off in a cave and writes this brilliant thing and then reveals it to the world and, oh, my gosh, this person’s famous forever.” Reality is, that’s not really how it works at all. There are in fact, geniuses… they are so incredibly rare that it’s almost a meaningless term. That myth just isn’t true. So, the ultimate geek fantasy is to go off into your cave and work and type in code and then shock the world with your brilliant new invention. It’s a desire to be seen as a genius by your peers. But there’s a flip side to that too. It’s not just about, “I want to be a genius and shock the world.” It’s also, “I’m insecure.” And what I mean by that is, “I also don’t want people to see my mistakes. “All right, maybe I won’t be a genius. Maybe I won’t shock the world with my brilliance, but at least I don’t want them to see my trail of failures and mistakes, and I’m gonna cover my tracks. They want to be seen as being clever. Clever people don’t make mistakes. Right, exactly. So the result is people wind up working in a cave. A classic example of this is: how long will you drive around before asking for directions? It’s hard to admit that you’ve made mistakes sometimes, especially publicly. So that’s why we showed these quotes in the beginning with people saying “Can you erase my history? Can you hide my project until it’s perfect?”

Fail fast

Think about the way you interact with your compiler. You have a really tight feedback loop. You write a function, you compile it, make sure it at least compiles. Maybe you write a unit test if you're doing great. But nobody sits down and writes thousands and thousands of lines of code and then runs their compiler for the first time. It just doesn't happen.

I think a big issue also around failure is just natural human fear. You know, I can relate to this personally. I started learning banjo a few years ago, playing in bluegrass jams. And they would occasionally try to call on me to do banjo solos, which is really, really hard to learn, and I just wouldn’t do it. Someone took me aside and he said “You realize that 50% of learning to solo is just not caring how good you sound and just losing the fear.” It was totally true. I was like, “All right, these are my friends. If I sound terrible, who cares?” And sure enough he was absolutely right. I started playing really bad solos, but it got better and better, and I kept learning, and that was a huge step. So if you can just make that mental shift and say “It’s all right. I’m gonna fail, and it’s not a big deal.” No fear. That’s fine. You move on. You learn. Executive makes a bad business decision, and the company loses $10 million for the company. The next morning, comes into work. His secretary says “The CEO wants to see you in his office.” And the guy hangs his head down. He’s like “This is it. I’m gonna get fired.” Walks into the CEO’s office, and he’s like “So I guess you want my resignation.” The CEO looks at him and says, “Resignation? “I just spent $10 million training you. Why would I fire you?” I lived in Italy for three years. I moved there, and I had been studying Italian, and I was really proud to use my Italian. I went into a cafe, and I ordered a sandwich, and they give me this massive sandwich, and I wanted a knife to cut it with. So I thought I’d be cool and use my Italian, and I promptly asked them for a toothbrush to cut my sandwich. The guy just looked at me. And I’m like, “Toothbrush.” And he’s like, “No.” But I never made that mistake again. Speaking languages in a foreign country is very intimidating. You’re just so scared of looking like a fool, but you don’t learn otherwise. Well, it’s the easiest way to learn. That sort of hot-white fear you get going up your neck because you asked for something embarrassing. It’s not just about embracing failure, but it’s also failing fast. Iterating as quickly as we can. This is something we actually talk about a lot at Google, was don’t just fail, fail quickly and pick up and try something different as fast as you can. And that’s why we’ve got this Google Labs now where people are experimenting with different projects. And if they fail, that’s fine. They’ll just put something up or change it the next day and try it again. The faster you can fail and the faster you can iterate, the faster you will learn and get better. If you practice, it makes your iteration-failure cycle faster. And it’s less scary to fail, because you’ll tend to have smaller failures. This way, the failures tend to get smaller over time, and the successes tend to get larger, and that’s a trend you’ll see, especially if you’re learning as you fail fast.

(the next two sections are taken from EMF2012 – Programming is terrible)

False dichotomy of “good” and “bad” programmers

Many blogs claim to elucidate a dichotomy of programmers – good and bad. Upon careful inspection, most of them turn out to actually describe the following two types:

A. Programmers who are like me.
B. Programmers who are not like me.

The assertion is that if you copy their personality (like a cargo cult), you too can be a successful programmer. Sometimes it is more veiled:

A. Programmers who use my favourite language
B. Programmers who do not use my favourite language

Or:

A. Programmers who share my political beliefs
B. Programmers who do not share my political beliefs

Why do we do this? It's easy and gets blog hits. Everyone loves a simple answer to a complex problem. Especially when the two choices are emotionally charged. Better still, when the good programmers have magical super powers. You'll hear terms like rockstar, ninja, founder, entrepreneur, all used in the same pre-pubescent machismo that our industry is drowning in. Unfortunately, it's total bollocks. The "some programmers are crazily more productive than others" claim comes from a 1960 study on batch processing vs. interactive programming. On twelve people. In a half-hour session. We've been repeating this myth endlessly. It's destructive. It's either repeated by idiots who believe they have nothing to learn from others, or repeated by learners to explain why they shouldn't try to learn.

So, are there two types of programmers? Probably not, but if I were to try, I'd say:

A. Programmers who know they will make mistakes
B. Programmers who think they will not make mistakes

It’s OK to write ugly code

Write code as if it were mistaken and you will have to change it, again and again – because you will. Fail fast and repeatedly. It is easier to get something right by getting it wrong a couple of times. It is easier to get it wrong a couple of times if you don't write so much code from the outset. Try to think a little more about how the code will be called than about how it works. It is far easier to change the implementation than the interface. Don't be an artist. Don't labour over the 'right' way to do things, but don't paint yourself into a corner. Write code that is easy to replace, rather than to extend. Bear in mind: it is OK to write ugly code, as long as the things using it don't have to write uglier code to use it. As you get further in programming, you will understand that the biggest problems are social, not technical.
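
A minimal sketch of that point in Python – my own example, not one from the talk: keep the interface small and stable so the ugly implementation behind it can be thrown away later without touching any caller. The file name and format below are made up for illustration.

  # Callers only ever depend on this one boring signature.
  def load_user(user_id):
      return _load_user_from_csv(user_id)

  # Ugly, quick-and-dirty implementation - fine, because it is easy to replace.
  # (Assumes a local "users.csv" with lines like "42,Ada"; purely illustrative.)
  def _load_user_from_csv(user_id):
      with open("users.csv") as f:
          for line in f:
              uid, name = line.strip().split(",")[:2]
              if uid == str(user_id):
                  return {"id": uid, "name": name}
      return None

  # Tomorrow _load_user_from_csv can become a database query or an HTTP call,
  # and nothing that calls load_user() has to change.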

Mistakes are expected

The other day, I came to the conclusion that the act of writing software is actually antagonistic all on its own. Arcane languages, cryptic errors, mostly missing (or at best, scattered) documentation – it’s like someone is deliberately trying to screw with you, sitting in some Truman Show-like control room pointing and laughing behind the scenes. At some level, it’s masochistic, but we do it because it gives us an incredible opportunity to shape our world.

How to Make Mistakes

Making mistakes is the key to making progress. There are times, of course, when it is important not to make any mistakes–ask any surgeon or airline pilot. But it is less widely appreciated that there are also times when making mistakes is the secret of success. What I have in mind is not just the familiar wisdom of nothing ventured, nothing gained. While that maxim encourages a healthy attitude towards risk, it doesn’t point to the positive benefits of not just risking mistakes, but actually of making them. Instead of shunning mistakes, I claim, you should cultivate the habit of making them. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. You should seek out opportunities to make grand mistakes, just so you can then recover from them.

There are no things every developer needs to know

For example, take this: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) Yes, but what about those developers who don't need to know a thing about character sets, like people who do numerical analysis, machine learning, or graphics, or work in a hundred other areas which don't deal with character sets? It's better to know something about Unicode than not, but it simply is not the case that every developer needs to know about Unicode. The real reason people write titles like that is that they generate traffic. There are generally two reasons to learn something: either it's useful to you for the problem you are solving right now, or you are interested in it just for its own sake. The problem with learning for any other reason is that the brain forgets things. If you are learning a thing with the idea "maybe I will need this someday", the probability you are going to forget it is high. Then, I hear a lot of talk about functional programming, vi, emacs, arch linux, etc. There's also a theory that says knowing functional programming is going to make you a better programmer. It's not clear whether that theory is true, and the evidence in favor of it is purely anecdotal. It may be the case that better programmers are more drawn to learning about functional programming (so, in technical language: it's just selection bias and signalling). At least for me, learning functional programming did not make me a better programmer. Learning how to fail fast did.

Perfectionism will kill you

If you are having problems with perfectionism, ask yourself the following:

  1. Is this really important or not? How important will it be in a year? In ten years? If it's not that important, don't act like it is – because it's not. Perfectionism is a tendency to overcommit, when what you really should be doing is optimizing the level of your commitment.
  2. Is there anything good in work that's not done perfectly? Most things can't be divided into absolute categories. For example, is the floor of your room perfectly clean? It's not – there is always a degree of cleanliness, and the threshold for cleaning your room is not "zero dust"; if it were, you would be cleaning your room all the time. What is the threshold after which the thing you are working on is good enough? Do not maximize in situations in which you should satisfice. It's important to know the difference.
  3. Is perfection something that could even be achieved? Let's say someone offers you a bet: they'll pay you 10 dollars if a six-sided die rolls anything but 6, and you will pay them 10 dollars if it rolls a 6. And you lose. Does it make any sense to say "I shouldn't have played that game"? Well, technically, you should have – it was the best decision to make with the limited information you had; you didn't know the future (see the quick calculation after this list). In the same sense, a lottery player who, after seeing the winning combination, says "I should have played that combination" is wrong – in fact, he shouldn't have played the lottery at all, because the expected gains are negative. There is no sense in saying the result should have been better than it is if that would imply you being omniscient and omnipotent.
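
Here is that quick expected-value check on the dice bet, using its own numbers:

  # Win $10 on five of the six faces, lose $10 on one face.
  p_win, p_lose = 5 / 6, 1 / 6
  expected_value = p_win * 10 + p_lose * (-10)
  print(round(expected_value, 2))   # 6.67 dollars per game, so taking the bet was the right call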

Inaction hurts more than action

Consider this scenario. You own shares in Company A. During the past year you considered switching to stock in Company B but decided against it. You now find that you would have been better off by $1200 if you had switched to the stock of Company B. You also owned shares in Company C. During the past year you switched to stock in Company D. You now find out that you'd have been better off by $1200 if you kept your stock in Company C. Which error causes you more regret? Studies show that about nine out of ten people expect to feel more regret when they foolishly switch stocks than when they foolishly fail to switch stocks, because most people think they will regret foolish actions more than foolish inactions. But studies also show that nine out of ten people are wrong. Indeed, in the long run, people of every age and in every walk of life seem to regret not having done things much more than they regret things they did, which is why the most popular regrets include not going to college, not grasping profitable business opportunities, and not spending enough time with family and friends. – Daniel Gilbert, Stumbling on Happiness

Learning how to program

This is one of those things that I always try to communicate, especially to students who are just starting out in computer science is that software, even though it’s fun to write code alone, you know, late at night in your basement, whatever, actually writing software that’s successful– it’s an inherently collaborative activity. And it actually forces you to deal with people and talk with people, and that’s why we encourage people to get involved in Open Source, because it’s sort of like, “Okay, well, maybe you’re still in college, but here’s your chance to actually work with people and work on a team and see what it’s gonna be like. I mean, one of the things I always ask people is, “Can you name a piece of software “that’s really successful, “really widely used by a lot of people, and was written by one person?” Fitzpatrick: And before anybody yells out Metafont, that’s not widely used, okay? But anyway, so this is a trap, okay? Of this sort of wanting to be a genius. – Ben Collins-Sussman

Some useful links you can go to (but you don’t need to, as indeed I didn’t use them all, it’s just a list my friends and I came up with):

“We now know a thousand ways not to build a light bulb” – Thomas Edison

“An expert is a man who has made all the mistakes which can be made, in a narrow field.” – Niels Bohr

The Search for Truth and Aesthetics of Pessimism

Now the Lord God had planted a garden in the east, in Eden; and there he put the man he had formed. The Lord God made all kinds of trees grow out of the ground—trees that were pleasing to the eye and good for food. In the middle of the garden were the tree of life and the tree of the knowledge of good and evil. (…) The Lord God took the man and put him in the Garden of Eden to work it and take care of it. And the Lord God commanded the man, "You are free to eat from any tree in the garden; but you must not eat from the tree of the knowledge of good and evil, for when you eat from it you will certainly die." (…) Now the serpent was more crafty than any of the wild animals the Lord God had made. He said to the woman, "Did God really say, 'You must not eat from any tree in the garden'?" The woman said to the serpent, "We may eat fruit from the trees in the garden, but God did say, 'You must not eat fruit from the tree that is in the middle of the garden, and you must not touch it, or you will die.'" "You will not certainly die," the serpent said to the woman. "For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil." When the woman saw that the fruit of the tree was good for food and pleasing to the eye, and also desirable for gaining wisdom, she took some and ate it. She also gave some to her husband, who was with her, and he ate it. Then the eyes of both of them were opened, and they realized they were naked; so they sewed fig leaves together and made coverings for themselves. – Bible, Genesis

There’s something strange and dark about knowledge, wisdom and truth, and this darkness has been the subject of many ancient myths and legends. Has this theme any basis in fact? One thing we know for sure: truth hurts. Let’s begin with a trivial example:

Suppose that you started off in life with a wandering mind and were punished a few times for failing to respond to official letters. As a result, you would be less effective than average at responding, so you got punished a few more times. Henceforth, when you received a bill, you got the pain before you even opened it, and it lay unpaid on the mantelpiece until a Big Bad Red late payment notice with a $25 fine arrived. More negative conditioning. Now even thinking about a bill, form or letter invokes the flinch response. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The "Unhappy Thing" — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you. – Less Wrong: Ugh fields

The expression "harsh truth" is so familiar (try googling it) that even cracked.com is talking about the 6 harsh truths that will make you a better person:

The human mind is a miracle, and you will never see it spring more beautifully into action than when it is fighting against evidence that it needs to change. Your psyche is equipped with layer after layer of defense mechanisms designed to shoot down anything that might keep things from staying exactly where they are — ask any addict. – 6 Harsh Truths That Will Make You a Better Person

Not only that, but there are scientific studies on something called optimism bias:

The optimistic bias is seen in a number of situations. For example: people believing that they are less at risk of being a crime victim, smokers believing that they are less likely to contract lung cancer or disease than other smokers, first-time bungee jumpers believing that they are less at risk of an injury than other jumpers, or traders who think they are less exposed to losses in the markets.

And self-serving bias:

When individuals reject the validity of negative feedback, focus on their strengths and achievements but overlook their faults and failures, or take more responsibility for their group’s work than they give to other members, they are protecting the ego from threat and injury. These cognitive and perceptual tendencies perpetuate illusions and error, but they also serve the self’s need for esteem. For example, a student who attributes earning a good grade on an exam to their own intelligence and preparation but attributes earning a poor grade to the teacher’s poor teaching ability or unfair test questions is exhibiting the self-serving bias.

Then we have the phenomenon of euphemisms:

It is obvious that the purpose of using euphemisms is to avoid something unpleasant or offensive. They come from psychological needs. Psychologically, if not linguistically, meanings can be defined by the sum of our responses to a word or an object. Words themselves may be seen as responses to stimuli. After a word has been associated for a long period of time with the stimuli that provokes it, the word itself picks up aspects of the response elicited by the stimuli object. When unpleasant elements of response attach themselves strongly to the word used to describe them, we tend to substitute another word free of these negative associations. In this way, psychologists tell us, euphemisms are formed. – Cultural Concepts and Psychological Tendencies in Euphemisms

Sometime during my life toilet paper became bathroom tissue… Sneakers became running shoes. False teeth became dental appliances. Medicine became medication. Information became directory assistance. The dump became the landfill. Car crashes became automobile accidents. Partly cloudy became partly sunny. Motels became motor lodges. House trailers became mobile homes. Used cars became previously owned transportation. Room service became guest room dining. Constipation became occasional irregularity. (…) The CIA doesn't kill anybody anymore. They neutralize people. Or they depopulate the area. The government doesn't lie. It engages in disinformation. Poor people used to live in slums. Now 'the economically disadvantaged' occupy 'substandard housing' in the 'inner cities.' And a lot of them are broke. They don't have 'negative cash flow.' They're broke! Because many of them were fired. In other words, management wanted to 'curtail redundancies in the human resources area,' and so, many workers are no longer 'viable members of the workforce.' – George Carlin on Euphemistic Language

It looks like humans believe whatever they want to believe and ignore beliefs which are frightening and negative. This has obvious implications for our personal lives, but what about the big questions? The cognitive algorithm here seems to be the same – for example, take atheism:

Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand. It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known. – Carl Sagan, Pale Blue Dot

Copernicus, Kepler, and Galileo have shown us that we are not at the centre of the universe; Darwin has shown us we are not made in the image of god; and modern neuroscience is showing us today that free will, at least as we commonly conceptualize it, is an illusion.

The truth, indeed, is something that mankind, for some mysterious reason, instinctively dislikes. Every man who tries to tell it is unpopular, and even when, by the sheer strength of his case, he prevails, he is put down as a scoundrel. – H. L. Mencken

Men fear thought as they fear nothing else on earth, more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habit. Thought looks into the pit of hell and is not afraid. – Bertrand Russell

I write this to you, dear Elizabeth, only in order to counter the most usual proofs of believers. Every true faith is infallible. It performs what the believing person hopes to find in it. But it does not offer the least support for the establishing of an objective truth. Here, the ways of men divide. If you want to achieve peace of mind and happiness, have faith. If you want to be a disciple of truth, then search. – Nietzsche, Letter to his sister

Searching for truth, we tend not to search where it hurts the most, as is demonstrated by confirmation bias:

Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.

The myth of Prometheus can be interpreted metaphorically – not as the fire-bringer, but as the truth-bringer:

For Boccaccio, "In the heavens where all is clarity and truth, Prometheus steals, so to speak, a ray of the divine wisdom from God himself, source of all Science, supreme Light of every man." With this, Boccaccio shows himself moving from the mediaeval sources with a shift of accent towards the attitude of the Renaissance humanists. Using a similar interpretation to that of Boccaccio, Marsilio Ficino in the fifteenth century updated the philosophical and more somber reception of the Prometheus myth not seen since the time of Plotinus. In his book written in 1476-77 titled Quaestiones Quinque de Mente, Ficino indicates his preference for reading the Prometheus myth as an image of the human soul seeking to obtain supreme truth. As Olga Raggio summarizes Ficino's text, "The torture of Prometheus is the torment brought by reason itself to man, who is made by it many times more unhappy than the brutes. It is after having stolen one beam of the celestial light […] that the soul feels as if fastened by chains and […] only death can release her bonds and carry her to the source of all knowledge." (…) Mary Shelley's 1818 novel Frankenstein is subtitled "The Modern Prometheus", in reference to the novel's themes of the over-reaching of modern humanity into dangerous areas of knowledge.

Speaking of dangerous knowledge, there is a BBC documentary of the same name:

The film begins with Georg Cantor, the great mathematician whose work proved to be the foundation for much of 20th-century mathematics. He believed he was God's messenger and was eventually driven insane trying to prove his theories of infinity. Ludwig Boltzmann's struggle to prove the existence of atoms and probability eventually drove him to suicide. Kurt Gödel, the introverted confidant of Einstein, proved that there would always be problems which were outside human logic. His life ended in a sanatorium where he starved himself to death.

Those mathematicians may have been the inspiration for Darren Aronofsky and his film Pi:

Personal note: When I was a little kid my mother told me not to stare into the sun, so once when I was six, I did. At first the brightness was overwhelming, but I had seen that before. I kept looking, forcing myself not to blink, and then the brightness began to dissolve. My pupils shrunk to pinholes and everything came into focus and for a moment I understood.

Restate my assumptions: One, Mathematics is the language of nature. Two, Everything around us can be represented and understood through numbers. Three: If you graph the numbers of any system, patterns emerge. Therefore, there are patterns everywhere in nature. Evidence: The cycling of disease epidemics; the wax and wane of caribou populations; sun spot cycles; the rise and fall of the Nile. So, what about the stock market? The universe of numbers that represents the global economy. Millions of hands at work, billions of minds. A vast network, screaming with life. An organism. A natural organism. My hypothesis: Within the stock market, there is a pattern as well… Right in front of me… hiding behind the numbers. Always has been.

The hero of the film Pi "stared at the sun" for too long and ended in a tragic way. This reminds us of another great myth:

Often depicted in art, Icarus and his father attempt to escape from Crete by means of wings that his father constructed from feathers and wax. Icarus' father warns him first of complacency and then of hubris, asking that he fly neither too low nor too high, because the sea's dampness would clog his wings or the sun's heat would melt them. Icarus ignored instructions not to fly too close to the sun, and the melting wax caused him to fall into the sea where he drowned.

It also reminds us of another individual who stared for too long:

He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you. – Nietzsche

We all know Aesop's fable of The Ant and the Grasshopper:

The fable concerns a grasshopper that has spent the warm months singing while the ant (or ants in some versions) worked to store up food for winter. When that season arrives, the grasshopper finds itself dying of hunger and begs the ant for food. When it replies, on being asked, that it sang all summer, it is rebuked for its idleness and advised to dance during the winter.

What does this have to do with anything? The ant took responsibility for its life – the work was hard, but it took reality seriously – while the grasshopper was living in a fantasy world. As in The Matrix, the ant took the red pill.

You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland and I show you how deep the rabbit-hole goes.

How long would the naive idealistic grasshopper survive amongst the hard-core cowboys on the western frontier?

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. – Robert Heinlein

Although Heinlein is speaking against the insects, he and the ant are here on the same side – you need to be tough and resilient, and take the truth even when it hurts.

Despite our squeamishness about cultural stereotypes, there are tons of studies out there showing marked and quantifiable differences between Chinese and Westerners when it comes to parenting. In one study of 50 Western American mothers and 48 Chinese immigrant mothers, almost 70% of the Western mothers said either that “stressing academic success is not good for children” or that “parents need to foster the idea that learning is fun.” By contrast, roughly 0% of the Chinese mothers felt the same way. Instead, the vast majority of the Chinese mothers said that they believe their children can be “the best” students, that “academic achievement reflects successful parenting,” and that if children did not excel at school then there was “a problem” and parents “were not doing their job.” Other studies indicate that compared to Western parents, Chinese parents spend approximately 10 times as long every day drilling academic activities with their children. – Why Chinese Mothers Are Superior

The Chinese are the ants, while the Westerners are the grasshoppers. This approach to parenting is also known as "tough love".

Man was, and is, too shallow and cowardly to endure the tragic divine-comedy of life. Upon looking into the Abyss, man becomes afraid. Unable to face the truth, he hides it from himself. Idealism is cowardice. Most men are unwilling to take responsibility for their own lives. They use Utopian Ideals to wait for a future "heaven on earth" to escape living. It is the strong who are pessimistic; they know man, and know that no Ideal or Ideology will ever change human nature. – Unknown, inspired by Oswald Spengler

The question of whether world peace will ever be possible can only be answered by someone familiar with world history. To be familiar with world history means, however, to know human beings as they have been and always will be. There is a vast difference, which most people will never comprehend, between viewing future history as it will be and viewing it as one might like it to be. Peace is a desire, war is a fact; and history has never paid heed to human desires and ideals… – Oswald Spengler

This is the aesthetics of pessimism: pessimism is bravery, idealism is cowardice. Pessimism is seen as a good in and of itself.

The development of the intellect will at last extinguish the will to reproduce, and will at last achieve the extinction of the race. Nothing could form a finer denouement to the insane tragedy of the restless will. Why should the curtain that has just fallen on defeat and death, always rise again upon a new life, a new struggle, and a new defeat? How long shall we be lured into this much ado about nothing, this endless pain that leads only to a painful end? When shall we have the courage to fling defiance into the face of the will? To tell it that the loveliness of life is a lie and that the greatest boon of all is death. – Arthur Schopenhauer

Or, as the modern mass-media Schopenhauer put it:

I think human consciousness is a tragic misstep in evolution. We became too self-aware; nature created an aspect of nature separate from itself; we are creatures that should not exist by natural law. We are things that labor under the illusion of having a self, an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody. Maybe the honorable thing for our species to do is deny our programming, stop reproducing, walk hand in hand into extinction, one last midnight, brothers and sisters opting out of a raw deal. – Rust Cohle, True Detective

Truth is hard, so pessimistic beliefs became a signal of intellectual honesty, toughness, and wisdom. If you want to convey the image of a truth seeker, pessimism seems to be the way to go. Thus, a pessimism bias is created. Pessimistic beliefs can then be rejected out of hand, because the pessimist is taken to be just another posturing "truth seeker". This is a problem, because many pessimistic beliefs seem to be true.

If you are a pessimist, ask yourself whether it is because pessimism is rational, or because you see pessimism as more noble than optimism.