Structure of disagreements

How many communicating civilizations are there in the visible universe? Alice thinks there are a lot. Bob thinks we are the only one. Why? Things you can take into account include:

  • How many stars are there?
  • Of those, how many have planets?
  • Of those, how many planets on average can potentially support life?
  • Of those, on how many does life actually develop?
  • Of those, how many are intelligent enough to develop civilization?
  • Of those, how many are communicating through space?
  • For how long do they keep sending signals out?

This gives us the Drake equation. There are many different ways to disagree about this question. Even if Alice thinks life develops on just 0.001 of potentially life-supporting planets while Bob thinks it's 0.005, Bob may still conclude we are the only civilization and Alice that there are a lot of civilizations out there, because they also disagree about the other parameters.

Even if their models become more similar, with Alice and Bob both moving to 0.002 on that parameter, Alice becomes even more sure there are a lot of civilizations and Bob becomes even more sure we are the only one: Alice's estimate just doubled, while Bob's shrank by a factor of 2.5. The sketch below works through both scenarios.
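
To see both effects concretely, here is a minimal sketch in Python. Every parameter value is invented for illustration; only the multiplicative structure matters.

```python
# Minimal sketch of a Drake-style model. All parameter values are
# made up for illustration; only the structure matters.

def n_civilizations(params):
    """A Drake-style estimate is just a product of parameters."""
    n = 1.0
    for value in params.values():
        n *= value
    return n

alice = {
    "habitable_planets": 1e20,   # planets that could support life
    "life_develops":     0.001,  # the parameter under discussion
    "civilization":      1e-5,   # life -> communicating civilization
    "still_signaling":   1e-6,   # fraction signaling right now
}
bob = dict(alice, life_develops=0.005, civilization=1e-12, still_signaling=1e-9)

print(n_civilizations(alice))   # ~1e6  -> "there are a lot"
print(n_civilizations(bob))     # ~5e-4 -> "we are probably alone"

# They converge on life_develops = 0.002 ...
alice["life_develops"] = bob["life_develops"] = 0.002
print(n_civilizations(alice))   # ~2e6  -> Alice is now even more sure
print(n_civilizations(bob))     # ~2e-4 -> and so is Bob
```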

The fundamental reason for disagreements is that people have different models of the world. If Alice and Bob could go parameter by parameter and agree on each, they would eventually come to an agreement. They could take the last part they disagree about (of all planets which have developed life, on how many does a civilization develop) and decompose it into some plausible steps which are needed for intelligence to develop – development of cells, the cell nucleus (eukaryotes), sexual reproduction, multicellular life, big brains in tool-using animals, etc. (examples taken from Robin Hanson). That will surely give them a lot to talk about. When they resolve all that, they can move on to the next part they disagree about.

The problem, of course, is that disagreements in real life aren't that simple. In real life you don't have an equation into which two people insert numbers, so that you can inspect the numbers and see which ones differ. In real life it's hard to know what the root cause of a disagreement is. If only we had such models, just imagine how much easier our lives would be! Luckily, you can build such models. You can do it on the fly in the middle of a conversation, and in most cases it takes just a few seconds. And I'm not talking just about Fermi estimates. Let's call this more general practice quick modeling.

First you need to do quick modeling of your own reasons for believing things and explain them to the other person. The better you are at quick modeling, the better you can explain yourself. It would be great if the other person did the same, but if they don't, you can do it for them. If you do it politely, to the other person it will look like you are simply trying to understand them, which in fact you honestly are trying to do. For example, Bob thinks Trump is better than Hillary on the question of immigration, and you disagree. There are different reasons to be concerned about immigration, and when Bob tells you his reasons, you can try to put what he said in simpler terms using some basic model. For starters, just ask yourself:

  • What are, in fact, the things we are talking about? In the case of immigration they could be: the president, laws, immigrants, natives, the economy, crime, stability, culture, terrorism, etc.
  • What are the causal connections between the things we are talking about? In the case of immigration, for example, some people may think immigrants influence the economy positively, others negatively.
  • What is the most important factor here? Of all the things listed, which is the most important to Bob? Is there some underlying reason for that? What does Bob in general think is important?

The goal is to take what the other person is saying in terms of intuitions, analogies, and metaphors and transform it into a model. In the previous example, you can think of it as a Drake equation for immigration. Imagine the world 5 years from now: in one world Trump makes decisions on immigration (world T), in the other Hillary does (world H). Since the topic is only immigration, don't imagine the whole presidency; limit yourself to keeping everything else the same and changing only the parameter under discussion. Which one is better, i.e. what is the difference in utility between those two worlds?
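
A minimal sketch of what such a model could look like. All factor weights and scores here are invented placeholders standing in for whatever Bob actually cares about; the two of you would fill them in together.

```python
# Toy "Drake equation for immigration": compare world T and world H on
# the factors Bob cares about, everything else held fixed. All weights
# and scores are placeholders.

weights = {"economy": 0.5, "crime": 0.2, "culture": 0.2, "terrorism": 0.1}

# Estimated state of each factor 5 years from now, on some agreed scale:
world_T = {"economy": -1.0, "crime": 0.5, "culture": 0.0, "terrorism": 0.2}
world_H = {"economy":  0.5, "crime": 0.0, "culture": 0.0, "terrorism": 0.0}

def utility(world):
    return sum(weights[f] * value for f, value in world.items())

# The sign says which world is better, the size says by how much, and
# each term shows how much any single factor contributes.
print(utility(world_T) - utility(world_H))   # -0.63: world H wins here
```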

Bob told you the reason he thinks what he thinks, and you built a quick model of it. The next step is to present that model to him to see if you understood him correctly. If you got it wrong, you can work together to make it more accurate. When he agrees that you got the basic model right, that's the first step: you understand the core of his model. Maybe he didn't think about the topic in those terms before, and the model was implicit in his head, but now you have it in explicit form. The first model you build will be oversimplified; Bob may add something to it, and you should also try to find other important things to add. Take the most important part you disagree about and decompose it further. When you resolve all the important sub-disagreements, you are done.

Let’s take a harder case. Alice voted for Trump because he will shake things up a bit. How to build a model from that? First step: what are we talking about? The world is currently in state X and after Trump it will be in state Y. In the process, the thing which will be shaken up is the US government, which affects the state we find ourselves in, and Alice thinks Y will be better than X. What sort of things get better when shaken up? If you roll a six sided dice and get 1, doing it again will probably get you a larger number. So if you think things are going really terrible right now, you will agree with Alice. (modeling with prospect theory gives the same result) What gets worse when shaken up? Systems which are in some good state. Also you may noticed that most systems get worse when shaken up, especially large complex systems with lots of parameters (high dimensionality) because there is more ways to do something wrong than to do it right. On the other hand, if the systems are intelligent and self-correcting, such systems sometimes end up in local optimums, so if you shake them up you can end up in a better state. To which degree does the US government have that property? What kind of system is that anyway? May it be the case that some parts of the US government will become better when shaken up and other become worse when shaken up? The better you are at quick modeling the better you can understand what the other person is saying.

When you notice the model has become too complex or you have gotten in too deep, you can simply return to the beginning of the disagreement, think a bit about what the next most important thing would be, and try that route. You can think of the disagreement as a tree:

[Figure: a disagreement tree, with the original disagreement A at the root and sub-disagreements B, C, D, E below it.]

At the root of the tree you have the original disagreement; the nodes below it (B and C in the picture) are the things the disagreement depends on (e.g. parameters in the Drake equation), and so on further down: what you think about B depends on D and E, etc. You can disagree about many of those, but some are more important than others. One thing you need to be aware of is going too deep too fast. In the Drake equation there are seven parameters, and you may disagree right at the start about the first one, the number of stars. That parameter may depend on five other things, and you may disagree on the first one of those, etc. Two hours later you may resolve your disagreement on the first parameter, but when you come to the second parameter you realize that the first disagreement was, in relative terms, insignificant. That's why you should first build a simple first approximation of the whole model, and only after that decompose the model further. Don't interrupt the other speaker to dive into a digression. Only after you have heard all of the basic parts do you have enough information to know which parts are important enough to decompose further. Avoid unnecessary digressions. The same principle holds not only for the listener but also for the speaker, who should try to keep the first version of the model as simple as possible. When someone is digressing too much it may be hard to politely ask them to speed up, but it can often be done, especially when debating with friends.
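
As a toy illustration of "most important first", you can treat the tree as a priority queue: always discuss the most important unresolved node next, instead of diving depth-first into whatever came up first. The tree and the weights below are invented.

```python
import heapq

# Toy disagreement tree: A depends on B and C, B depends on D and E.
# The importance weights are invented for illustration.
tree = {
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("D", 0.1), ("E", 0.6)],
    "C": [], "D": [], "E": [],
}

def discussion_order(root):
    """Visit the most important unresolved disagreement first."""
    order, heap = [], [(-1.0, root)]
    while heap:
        neg_importance, node = heapq.heappop(heap)
        order.append(node)
        for child, weight in tree[node]:
            # a child's importance is its parent's times its own weight
            heapq.heappush(heap, (neg_importance * weight, child))
    return order

print(discussion_order("A"))   # ['A', 'B', 'E', 'C', 'D'] - not depth-first
```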

In some cases, there may be just one part of the equation which dominates the whole thing, and you may disagree on that part, which reduces the disagreement about A to a disagreement about B. Consider yourself lucky; that's a rare find, and you have just simplified your disagreement. In that case, you can call B a double crux.

Applying quick modeling with other people will reveal that people's models of the world can be very complex. It may take you a lot of time to come to an agreement, and maybe you simply don't have the time. Or you may need many conversations about different parts of the whole model, resolving each part separately before coming to full agreement. Some people are just not interested in discussing some topics, some people are not intellectually honest, etc. – the usual limitations apply.

Things which may help:

  • Think of concrete examples. Simulate in your head "how would the world look if X". That can help you with decomposition: if you run a mental simulation you will see what things consist of, what is connected, and what the mechanisms are.
  • Fermi estimates. Just thinking about how you would estimate a thing will get you into a modeling state of mind, and putting the actual numbers in will give you a sense of the relative importance of things (see the sketch after this list).
  • Ask yourself why you believe something is true, and ask the same about the other person. You can say to them in a humble tone of voice "… that's interesting, how did you reach that conclusion?" It's important to actually want to know how they reached that conclusion, which you in fact will want to know if you are doing quick modeling.
  • Simplify. When things get too complex, fix the value of some variable. This one is identical to advice from LessWrong so I will borrow an example from them: it's often easier to answer questions like "How much of our next $10,000 should we spend on research, as opposed to advertising?" than to answer "Which is more important right now, research or advertising?" The other way of simplifying is to change just one value while holding everything else constant, what economists call ceteris paribus. Simulate how the world would look if just one variable changed.
  • Think about what the edge cases are, but pay attention to them only when they are important. Sometimes they are; mostly they are not. So ignore them whenever you can. The other person is almost always talking about the average, standard case, and if you don't agree about what happens on average there is no sense in discussing weird edge cases.
  • In complex topics: think about inferential distance. To take the example from LessWrong: explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn't already know about evolution, they would understand the concepts of evidence, Occam's razor, [etc… while] explaining the evidence for the theory of evolution to someone without a science background would be much harder. There may be some fundamental concepts which the other person is using and you are not familiar with; ask about them, and think about what you may learn from the other person. Also, try to notice if the other person doesn't understand some basic concept you are using and try to, in a polite, non-condescending way, clarify what you mean.
  • In complex topics: think about differences in values. If you think vanilla tastes better than chocolate and the other person disagrees, that's a difference of values. You should separate that from the model of how the external world works, and focus on talking about the world first. In most cases it makes no sense to talk about the values of different outcomes when you disagree even about what the outcome will be. What sometimes looks like a difference of values is often connected to what you think about how the world works. Talk about values only if you seem to completely agree about all of the relevant aspects of the external-world model. When talking about values, also do quick modeling, as usual.
  • Practice.
  • And the most important thing for last: become better at quick modeling in general.
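
As an example of the Fermi-estimate bullet above, here is the classic piano-tuner estimate written out as a model. Every number is a rough guess; the point is to see which guesses dominate the result.

```python
# Classic Fermi estimate: how many piano tuners are there in Chicago?

population        = 3e6    # people in Chicago
people_per_piano  = 100    # one piano per ~100 people
tunings_per_year  = 1      # each piano tuned about once a year
tunings_per_tuner = 1000   # ~4 tunings a day, ~250 working days a year

pianos = population / people_per_piano
tuners = pianos * tunings_per_year / tunings_per_tuner
print(tuners)   # ~30
```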

There was a post on LessWrong about a method called Double Crux for resolving disagreements, but I think quick modeling is a better, more general method. Also, the Double Crux blog post mentions some things which I think you should not worry about:

  • Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is usually not the case that those are the problem. If at some point in the discussion it turns out that one of those is a problem, it will be crystal clear to you and you can just stop the discussion right there, but don't worry about it beforehand.
  • Noticing of subtle tastes, focusing and other resonance checks, etc. Instead of focusing on how your mind may be making errors and introducing subtle biases, do what scientists and engineers have done for centuries and what works best: look at the world and build models of how the world works. When Elon Musk is designing batteries he doesn't need to think about the subtle biases of his mind; he needs to think about the model of the battery and how to improve it. When bankers are determining interest rates they don't need to think about how their minds do hyperbolic discounting; they can simply replace it with exponential discounting, as in the sketch below. The same goes for disagreements: you need to build a model of what you think about the topic and of what the other person thinks, and decompose each sub-disagreement successively.
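
A minimal sketch of the discounting example: hyperbolic discounting (roughly what untrained intuition does) reverses preferences as the dates approach, while exponential discounting never does. The rates here are arbitrary.

```python
# Hyperbolic vs exponential discounting of a future reward.

def exponential(value, years, annual_rate=0.05):
    return value / (1 + annual_rate) ** years

def hyperbolic(value, years, k=1.0):
    return value / (1 + k * years)

# Hyperbolic discounting reverses preferences as the dates approach:
print(hyperbolic(100, 1), hyperbolic(110, 2))    # 50.0 vs ~36.7: take the earlier $100
print(hyperbolic(100, 10), hyperbolic(110, 11))  # ~9.1 vs ~9.2: wait for the later $110
# Exponential discounting is consistent at any distance:
print(exponential(100, 1), exponential(110, 2))    # ~95.2 vs ~99.8: wait
print(exponential(100, 10), exponential(110, 11))  # ~61.4 vs ~64.3: still wait
```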

Advantages over double crux:

  • You don’t need both persons being familiar with the method.
  • It works even when there is no double crux. In my experience most disagreements are too complex for existence of double cruxes.
  • There is no algorithm to memorize and you don’t need a whiteboard.

Not to mention that quick modeling is useful not just for resolving disagreements but also for making better decisions and, obviously, for forming more correct models of the world.

Simple Rationality

How to be rational? Model things explicitly, and become good at it. That's it. Building good models is not easy and requires some work, but the goal is simple.

When thinking about something, there are things to take into consideration. For example, if you are thinking about moving into a new apartment, you may take into consideration the price of the current apartment, the price of the new apartment, how long your commute will be, whether you like the neighborhood, how important each of those things is to you, etc. We can make a list of things to take into consideration; call that the ontology of the model. You can decompose a thing into smaller things, the way a mechanic does when repairing equipment, or a physicist does when thinking about systems. It's not good to forget to think about things which are important. It's also not good to think too much about things which are not important. So there's an optimal ontology which you should be aiming for. (math analogy: sets)

The next thing is to figure out which things are connected to other things. Of course, some of the connections will not be important, so you don't need to include them. (math analogy: graphs) It may also be important to note which things cause other things and which things merely correlate with each other.

So, once you know which things are connected to each other, the next thing is to estimate how strong the effects are. If you climb a tall mountain, the boiling point of water will be lower, and you can draw a graph with altitude on the x axis and boiling point on the y axis. If some things correlate, you can draw a graph in a similar way. (math analogy: functions)
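
For example, the altitude/boiling-point graph can be written as an explicit function, here using the rough rule of thumb that water's boiling point drops about 1 °C for every 300 m of altitude.

```python
# The altitude -> boiling point relation as an explicit function,
# using a rough rule of thumb (about -1 C per 300 m).

def boiling_point_c(altitude_m):
    return 100.0 - altitude_m / 300.0

for altitude_m in (0, 1500, 4800):   # sea level, Denver-ish, Mont Blanc-ish
    print(altitude_m, boiling_point_c(altitude_m))   # 100.0, 95.0, 84.0
```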

Some things can’t be quantified that way, instead they can be either true or false. In that case, you can model the dependencies between them. (math analogy: logic)

For some things, you don’t know exactly how they depend on each other, so you need to model probabilities, if X is true, how probable is it that Y is true? (math analogy: basic probability theory)

Some things are more complicated. For example, if you are building a model of what the temperature in your town will be 7 days from now, there are various possible outcomes, each of which has some probability. You can draw a graph with temperature on the x axis and probability on the y axis. (math analogy: probability distributions)

Let’s say you are making a decision and you can decide either A or B. You can imagine two different worlds, one in which you made decision A, other where you made decision B. How would the two worlds be different from each other? (math analogy: decision theory)

Making the decision model more realistic: for each decision there is a space of possibilities, and it's important to know what that space looks like, which things are possible and which are not. Each possibility has some probability of happening. You can say that for each decision there exists a probability distribution over the space of possibilities.
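
A minimal sketch of such a decision model, with invented outcomes and probabilities:

```python
# Two decisions, each with its own probability distribution over
# outcomes. Utilities and probabilities are invented for illustration.

decision_A = {10: 0.5, 0: 0.5}   # risky: utility -> probability
decision_B = {4: 1.0}            # safe

def expected_utility(distribution):
    return sum(utility * p for utility, p in distribution.items())

print(expected_utility(decision_A))   # 5.0
print(expected_utility(decision_B))   # 4.0 -> A wins on expected utility
```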

You can take into consideration the dynamics of things, how things change over time. (math analogy: mathematical analysis)

A more advanced example: each thing in your ontology has a number of properties; call that number n. Each property is a dimension, so the thing you are thinking about is a dot in an n-dimensional property space. (math analogy: linear algebra)
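
Tying this back to the apartment example: each apartment is a dot in a three-dimensional property space, and a weighted sum collapses it into a single score. All numbers are made up.

```python
# Each apartment is a dot in property space: (rent, commute minutes,
# neighborhood quality). A weighted sum collapses the n-dimensional
# dot into one score. All numbers are made up.

apartment_old = (700, 45, 0.6)
apartment_new = (850, 20, 0.8)

weights = (-0.01, -0.1, 10.0)   # how much you care about each dimension

def score(apartment):
    return sum(w * x for w, x in zip(weights, apartment))

print(score(apartment_old))   # -5.5
print(score(apartment_new))   # -2.5 -> the new apartment wins
```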

How many people build explicit models? It seems to me, not many – except maybe in their professional lives. Just by building an explicit model, once you understand the basic concepts and tools for doing so, you get rid of almost all logical fallacies and cognitive biases. The first thing is to take into account which things are actually important and which are not – the basic ontology. How often do people even ask that question? It's good to have correct models and bad to have incorrect ones, but the first step is to have models at all.

Almost any model you build is going to be incomplete. You need to take that into account, and there is already a name for that: model uncertainty. Often a thing is just a black box you can't yet understand, especially if you are reflecting on yourself and your emotions; in those cases intuition (or felt sense) has a crucial role to play. The same is obviously true for decisions which need to be made in a second – that's not enough time to build a model. I'm not saying you need to replace all of your thinking with explicit modeling. But when it comes to learning new things, figuring things out, and problem solving, explicit modeling is necessary. It would be good to have that skill developed well enough that it's ready for deployment at the level of reflex.

When you have a disagreement with someone, the reason is you don’t have the same models. If you simply explain what your model of the phenomena under discussion is, and how you came to that model, and the other person explains their model, it will be clear what the source of the disagreement is.

There are various specific techniques for becoming more rational, but a large part of them are simply examples of substituting explicit modeling for intuition. The same is the case for instrumental rationality: if you have a good model of yourself – of what you really want, of your actions, of the outcomes of your actions, of how to influence yourself to take those actions, etc. – you will be able to steer yourself into taking the actions which lead to the outcome you desire. The first thing is to build a good explicit model.

It’s good to know how our minds can fail us, what the most common cognitive biases and mistakes are. If you know what the failure pattern is, then you can learn to recognize and avoid that kind of failure in the future. There’s another way of avoiding mistakes: building good models. If you simply focus on the goal and how to achieve it, you can avoid cognitive biases without knowing they exist.

I’m not saying that knowing certain problem-solving techniques is bad, but becoming better at modeling things from first principles often beats learning a lot of specific techniques and failure patterns. Things may be different in professional setting where you do a highly-specialized job which requires a lot of specific techniques, but I’m talking about becoming more rational in general. Living your own specific life is not a wide area of expertise – it’s just you, facing many unique problems.

The other way general modeling beats specifics is captured by the following example: there's no benefit to driving faster if you're driving in the wrong direction. Techniques give you velocity; modeling gives you direction. Especially if you go wide enough and model things from first principles, as Elon Musk calls it. It focuses you on the problem, not on actions. The problem with specific techniques is that they are action-focused, but to really solve a novel and unique problem you need to be problem-focused. Before you know what action to take, you need to understand the problem better. This is also known as Theory of Change in some circles. The effect of knowing where you are going is larger than the effect of anything else in getting you to your destination. The most important thing is prioritization.

The alternative to that is reasoning by analogy. The problem with analogies is that they break down if the things you are analogizing are dissimilar enough. The more complex the problem and the environment (for example, a project involving multiple people), the less well the analogy will work. When talking about small, simple systems, analogies can work, but you need to check whether the analogy is valid. To do that you need explicit models: how are the two things similar and how are they different; is any difference important enough to break the analogy?

One way to improve your modeling capabilities is practice. Simply, when you think about something, pay extra attention to the various tools of modeling (ontology, graphs, functions, probabilities…) depending on which tool you want to get better at using. That way, after a certain amount of practice, your brain will become better at it and it will become, if not automatic, then at least painless. Another thing which improves the ability is simply learning a lot of existing models from various disciplines; that way you gather more tools with which to build your own models. Areas of special interest for that purpose seem to be math, physics, computer science, and philosophy.

Humans have limited working memory, so it's crucial to focus first on the most important things only. Later you can build on your model in an iterative way and make it more complex by decomposing things or introducing new things into the ontology. The problem with increasing complexity is that the brain is not good at nuance. If you include a lot of things in your ontology, where the most important thing is several orders of magnitude more important than the least important one, the brain is not good at feeling that difference. By default all things seem to have the same importance, and you need to consciously remind yourself of the relative importance of each. That's why, after you have built up a lot of complexity and figured out what the important things are, you need to simplify down to the important things only.

This is all system 2 thinking, as it's called in behavioral economics. There may be ways to debias yourself and get your system 1 more in line with system 2, but the best way I've found to do that is, again, by building models and thinking about them. If you do that, your system 1 will, with time, fall more and more in line with system 2. Even if it doesn't fall in line, you will learn to rely on models more in the cases where they work better. Bankers use exponential discounting (at least when the stakes are high, for money, in a professional setting) even if their system 1 is using hyperbolic discounting. The reason they are able to be unbiased there is that they have a good model.

We can avoid a huge amount of irrationality just by focusing more on explicit model building. If every time we had a problem our first impulse was to build an explicit model of it, there would be a lot more rationality in the world. The problem in most cases is not that people have incorrect models; they don't have explicit models at all and are relying on intuition and analogy instead. If it could in any way be said that they have incorrect models, those are the implicit ones they never examine.

Spreading the meme of "first principles" can be hard because of a common misunderstanding which says that when you build a model of a thing you are reducing the thing to the model. For example, when effective altruists try to estimate the cost of saving a life, someone may say "you can't put a dollar sign on a life" or something similar. This of course makes no sense, because no one is actually reducing the phenomenon being modeled to the model. We all know that the real thing is more complicated than any model of it. Some models can be overly simple, but that is a problem with the particular model, not with the practice of modeling itself. If your model is too simple, just add the things which are missing. In practice, the opposite error – creating overly complex models full of irrelevant things and missing the important ones – seems just as common.

Of course, building good models is hard. But now that we know that modeling is the thing we are trying to do when we are trying to be rational, we can at least think about it in a principled way. We can build models about how to become better at building models.

Self-improve in 10 years

Change is hard. Especially if for the last month you did nothing but sleep 12 hours per day and browse reddit while awake. Years ago, that was the state I was in. Back then, my current level of productivity would have been unimaginable. This blog post is a rough sketch of how that change happened.

Realizing my brain makes mistakes

At least I had the motivation to read. Reading Wikipedia, I found the page on cognitive biases. Learning about things like social psychology, neuroscience, behavioral economics, and evolutionary psychology made me better understand how the mind works. That kind of knowledge is important if you are trying to change the way your mind works. It also made me less judgemental towards myself.

Main resource: Thinking, Fast and Slow.

Recognizing the mistakes which lead to personal problems

There are certain types of mistakes, like all-or-nothing thinking, which lead to unhappiness. The field of cognitive-behavioral therapy studies errors like this, and psychologists have devised practical exercises for removing those kinds of errors from your thinking. Doing the exercises really makes the difference; just reading will not help as much.

Main resource: Feeling Good.

Habit formation and habit elimination

Some bad habits I completely eliminated: gaming addiction, reddit addiction, various web forum addictions, too much soda/coke. Good habits I built: a stable sleep pattern, eating healthier, exercise, a gratitude journal. Self-improvement largely consists of habit optimization. You will fail many times. The key is getting up and trying again.

Resources: I don’t even know! This video is good, I hear the book The Power of Habit is not bad but I have not read it. Read from multiple soruces and try things out until they start working.

Getting things done

This is just another habit but it deserves a separate section. I don’t know how people survive without todo lists anymore. Once you write every boring thing down so you don’t need to remember it, your mind is free to do creative stuff.

Main and only resource: Getting Things Done.

Mindfulness meditation

Another habit which deserves a separate section, because it is awesome and there are also some scientific indications that it is awesome. While meditating you are training your mind, which results in better focus and better metacognition.

Resources: Mindfulness in Plain English, UCSD guided meditations, UCLA guided meditations, Sam Harris guided meditations.

Gaining practical skills

If you learn a skill that is subjectively worth $5 a day to you, that adds up to over $50,000 over the next 30 years ($5 × 365 × 30 ≈ $54,750). Among the most important skills for me were touch typing and speed reading, since I type and read every day.

Resources: keybr, Breakthrough Rapid Reading, Coursera, Udacity.

Gaining social skills

This may come hard to some people, but the benefits are huge, as a large part of life satisfaction (or misery, if you do it wrong) comes from interaction with other people. This is still very much a work in progress for me.

Main resources: Nonviolent Communication, How to Win Friends and Influence People, rejection therapy. If you are male and have problems with romantic relationships, Models may help.

Exploring productivity tricks and decision-making techniques

Aversion factoring, goal factoring, implementation intentions, non-zero days, pre-hindsight, and other techniques gathered mostly from LessWrong posts like: A "Failure to Evaluate Return-on-Time" Fallacy, Humans are not automatically strategic, Dark Arts of Rationality, Travel Through Time to Increase Your Effectiveness.

Resources: Clearer Thinking has a lot of useful exercises.

It depends on where you are starting from, but self-improvement is usually hard, and it may take you 10 years. I hope this list will be useful to others.


Irrational unhappiness

Your beliefs influence the way you react to the world around you. Every experience you perceive is first processed by your brain, and only the interpretation of the experience triggers the emotional response. If your perceptions of the world are biased, your emotional responses will also be biased. Cognitive science has already identified a lot of ways in which human thinking goes wrong, and this list is a similar attempt to map the specific ways in which certain irrational thought patterns lead to bad outcomes. Naming those patterns makes them more noticeable and easier to correct in our day-to-day thinking. The examples in this post are taken from Feeling Good.

All-or-nothing thinking

Example: A straight-A student gets a B on an exam and thinks “Now I’m a total failure.”

This results from modeling the world in a binary way instead of using a more realistic continuous model. To use a trivial example, even the room you are sitting in right now is neither perfectly clean nor completely filled with dirt; it is partially clean. By modeling the cleanliness of the room with just two states, "clean" and "dirty", you are losing a lot of information about its real state.

Overgeneralization

Example: A young man asks a girl for a date and she politely declines, and he thinks “I’m never going to get a date, no girl would ever want a date with me.”

The man in the example concluded that because one girl turned him down once, she would always do so, and that he would be turned down by every woman he ever asks out. Everything we know about the world around us tells us the probability of such a scenario is very low. Before you conclude anything, you should think about how to interpret the event in light of your background knowledge, or potentially use a larger sample size.

Mental filter

Example: A college student hears some other students making fun of her best friend and thinks “That’s what the human race is basically like – cruel and insensitive!”

The negative aspects of a situation disproportionately affect the thinking about the situation as a whole, so the whole situation is perceived as negative. The college student from the example overlooks the fact that in the past months few people, if any, have been cruel or insensitive to her. Not to mention that the human race has demonstrated many times that it is not cruel and insensitive most of the time.

Disqualifying the positive

Example: Someone receives a compliment and thinks "They're just being nice", and when they succeed at something they say "It doesn't count, that was a fluke."

That’s like a scientist intent on finding evidence to support his pet hypothesis and rejecting all evidence at the contrary. Whenever they have a negative experience they say “That just proves what I’ve known all along”, but when they have a positive experience they say it’s just a fluke.

Mind reading error

Example: Someone is giving an excellent lecture, notices a man in the front row yawning, and thinks "The audience thinks I'm boring."

This is making assumptions with not enough evidence and taking them as truth, while not considering the many other possible explanations of the same phenomenon. In the example above, the yawning man may simply not have gotten enough sleep the night before.

Fortune teller error

Example: Someone is having trouble with some math problem and thinks “I’m never going to be able to solve this.”

This is assigning 100% probability to just one possible future outcome, while there are many possible outcomes, each with some uncertainty about it, and each of which should be considered.

Emotional reasoning

Example: “I feel inadequate. Therefore, I must be a worthless person”

Taking your emotions as evidence is misleading because your emotions reflect your beliefs, and if your beliefs are formed in a biased way, this just propagates the error further. There are more things to take into account than just your emotions, and since your emotions can be highly unreliable, there are situations where they need to be completely reevaluated instead of being taken into account automatically.

Should-rules

Example: “I should be well-prepared for every exam I take”

All else being equal, it would be better to be well-prepared for each exam, but when your all-too-human performance falls short of your standards, your should-rules create self-loathing, shame, and guilt. In a similar way, if the behavior of other people falls short of your unrealistic expectations, you'll feel bitter and self-righteous. Not being well-prepared for one exam does not make the whole situation much worse, and it is expected that a certain number of failures will happen over time. There is no need to attach a negative moral component to something which is a normal occurrence.

Mislabeling

Example: A woman on a diet ate a dish of ice cream and thought, “How disgusting and repulsive of me, I’m a pig.”

When describing yourself you should look at all the behaviors and beliefs you have, not just the one thing that is most prominent in your mind at a single point in time. You cannot be equated with a single thing you once did – the label is too simplistic. The problem boils down to ignoring a large part of your behaviour and considering only a small subset of it.

Personalization

Example: When a mother saw her child’s report card, there was a note from the teacher indicating the child was not working well. She immediately decided, “I must be a bad mother. This shows how I’ve failed.”

There are a large number of factors influencing any outcome, only one of which is the influence you have on other people. You do not have complete control over other people and the events related to them. Since you have only partial influence, it is important to take the whole picture into account, instead of arbitrarily concluding that what happened was your fault or reflects your inadequacy even when you were not primarily responsible for it.

~

The underlying theme in all of these thought patterns is an incomplete way of thinking.

Our models of the world are oversimplified:

  • All-or-nothing thinking
  • Emotional reasoning
  • Should-rules
  • Personalization

The data we consider as evidence is radically incomplete:

  • Overgeneralization
  • Mental filter
  • Disqualifying the positive
  • Mislabeling

The number of hypotheses we consider is way too small:

  • Mind-reading error
  • Fortune-teller error

Why fear AI? Have Stephen Hawking and Elon Musk gone insane?

What are these people who fear AI even talking about? This is the post to answer that question. The ideas presented here come mostly, but not exclusively, from Bostrom’s Superintelligence book.

~

  • The science of artificial intelligence is progressing, and the rate of progress is probably not slowing down. Let’s just assume the rate of progress is constant.
  • We know it’s possible to construct physical systems with at least human level of intelligence, for we are such systems, built by evolution.
  • We are the first intelligent species to evolve on this planet. When something first appears in evolution, it is often very simple compared to later forms. So, compared to all possible levels of intelligence which could be attained, the chance that we are at the top is low. Relative to the time scale of human evolution we developed culture very recently, so it's more likely we are near the stupidest level of intelligence needed to have culture.

So, it’s very probable we will one day construct AI more capable than humans. Experts on AI have traditionally been incorrect about predicting the arrival of human-level AI, but it’s worth mentioning they currently think there is a 50% chance of smarter than human AI appearing before 2050.

~

There is more than one way to create a system more intelligent than humans today, most importantly:

  • simulating the brain on a computer, known as "whole brain emulation";
  • programming computers to be smarter, known as "artificial general intelligence" (in this post simply termed "AI").

When we say "AI", you can think of a "super-capable goal-achiever", something very different from humans. It's not necessary for it to have consciousness or emotions; it may be more like a force of nature than a human mind.

~

We could be smarter than we are if:

  • our neurons were faster. There is a large probability that an AI or a brain emulation could run a lot faster than our brains.
  • we had more neurons. Our computers can be scaled easily to sizes of warehouses.
  • we could think about more stuff at the same time. The human brain can keep only around seven things in its "working memory" at a time. It's hard even to imagine how we would think if we could think about hundreds, or thousands, or millions of things in the same moment.
  • we remembered more stuff. Our computers are great at remembering things, far beyond the human capacities.
  • we had more input data from various sensors, did not grow tired, and our brains were built from more reliable parts. Our computers… I’m sure you get it.
  • we could run detailed, non-damaging experiments on our brains; then we could find methods to make them smarter. That could easily be done on a brain emulation or on computer code.
  • we had more copies of ourselves; then we (the sum of all copies) would be stronger than just one instance. Computer programs can easily be copied.
  • we were more coordinated with each other. The copies mentioned in the previous point would be better at coordination because they have the same goals.
  • we could share our memories and skills with each other. AI programs could share data and algorithms with each other.

When we take this into consideration, the AI could potentially become vastly more capable than us. Once we have something like a human-level AI, it would be easy to improve it by adding more hardware and more copies of it. Then it could start working on improving itself further, and further, and further.

~

The main reason we are the dominant species on this planet is our intelligence, which enables culture – the way to store information for new generations – and technology – the ability to change our environment to our desires. When it comes to having power over your environment, it's intelligence that counts. If an AI were smarter than us, it would also be better at storing information and making new technology, which means better at changing its environment to its desires. AI could become better than humans at any mental task imaginable, such as: scientific and technological research, including AI research, strategic planning and forecasting, social and psychological modeling, manipulation, rhetorical persuasion, and economic productivity in general.

~

Physics sets some limits on what we can do. Still, the range of possibilities for the future is vast. The famous scientist John von Neumann proposed one day building a kind of spacecraft which could travel to other star systems and even galaxies, make copies of itself, and send the copies on to colonize further stars and galaxies. Travelling at 50% of the speed of light, we could reach 6×10^18 stars with such spacecraft, which is around 10 million galaxies. Placing humans in those spacecraft, if 1% of stars have planets which can be made habitable through terraforming, with each spacecraft colonizing such a planet upon landing, that results in a sum of around 10^34 human lives to be lived before the universe becomes uninhabitable. If we construct O'Neill cylinders, that would be about 10^43 human lives. The future could be great. Also, the AI would have a lot of stuff it could shape to its desires.
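
To see how such an estimate multiplies out, here is a sketch. The star count and the 1% figure come from the paragraph above; the lives-per-planet number is my own placeholder (1 billion people at a time, 100-year lives, for 1 billion years), which lands within about an order of magnitude of the 10^34 figure, whose own per-planet assumptions are not spelled out here.

```python
# Multiplying out the numbers in this paragraph. reachable_stars and
# terraformable are from the text; lives_per_planet is a placeholder
# assumption chosen only to show how such an estimate is built.

reachable_stars  = 6e18
terraformable    = 0.01               # 1% of stars, per the text
lives_per_planet = 1e9 * 1e9 / 100    # = 1e16 lives over a planet's lifetime

print(reachable_stars * terraformable * lives_per_planet)   # ~6e32, i.e. ~10^33
```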

~

We easily notice differences between the minds of the people around us, but those differences are small when we compare our minds to other biological species. Now compare a human brain to an AI. In relative terms, any two human brains are nearly identical, while the difference between us and an AI would be vast. Don't imagine AI as "something like a human, just smarter and different" – imagine it as a super-capable goal-achiever, mathematically choosing the most efficient way to achieve a goal. The only reason we have the goals we have is evolution, but an AI would not evolve like we did. To an AI, it would not be obvious that some things are right and some things are wrong. It would just try to achieve whatever goal it has been given by its programming.

~

If you have some goal or combination of goals, there are some sub-goals which are almost always useful to achieve. Whatever your current goal, you are more likely to achieve it if:

  • you don’t die soon
  • your current goal doesn’t change
  • you become smarter
  • you have better technology
  • you have more resources.

~

To illustrate the last two paragraphs, let's take a silly example: what would happen if we had an AI (super-capable goal-achiever) whose goal was to maximize the number of paperclips in its collection? It could start building nanotechnology factories, solar panels, nuclear reactors, supercomputer warehouses, rocket launchers for von Neumann probes, and other infrastructure, all to increase the long-term realization of its goal, ultimately transforming a large part of the universe into paperclips. If instead we give it the goal of producing at least one million paperclips, the result would be the same, because the AI can never be completely sure it has achieved its goal, and each additional paperclip produced increases the probability that it has. Also, it could always invest more resources into additional backup systems, defense, additional checks (recounting the paperclips), etc. It's not that this problem can't be solved at all. The point is that it is much easier to convince oneself that one has found a solution than it is to actually find one. The same principle holds for all of the problems presented here.

~

What if we made an AI which does just what we want? The AI listens to our wishes and sets them as its final goal. The problem is, our wishes can be "fulfilled" in ways we didn't want them to be fulfilled. Some examples:

  • Final goal: “Make us smile”. Unintended result: Paralyze human facial muscles to form constant beaming smiles.
  • Final goal: “Make us happy”. Unintended result: Implant electrodes into the pleasure centers of our brains.
  • Final goal: “Act so as to avoid the pangs of bad conscience”. Unintended result: Remove the part of our brain that produces guilt feelings.
  • Final goal: “Maximize your future reward signal”. Unintended result: Short-circuit the reward pathway and clamp the reward signal to its maximal strength.

~

Let’s make AI follow the Asimov’s three laws of robotics. Take the first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. This would make AI very busy, since it could always take some action which would reduce the probability of a human being coming to harm. “How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate.” – Nick Bostrom

~

What if we just say: maximize pleasure and minimize pain in the world? How do we define pleasure and pain? The answer depends on many unsolved issues in philosophy. It needs to be written in a programming language, and even a small error could be catastrophic. As Bertrand Russell said, "Everything is vague to a degree you do not realize till you have tried to make it precise." Consider an AI taking hedonism as its final goal, realizing that simulated brains are more efficient than biological ones, and then maximizing the number of simulated brains, keeping them in an infinite loop of one second of intense pleasure. These simulated brains would be more efficient if they were simpler, so the AI reduces them as far as it can, removing memory and language and stripping the brain down to just the "pleasure centers". If the AI is wrong about what pleasure means, and about which physical processes generate pleasure, the universe will be filled not with pleasure but with "processes that are unconscious and completely worthless—the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies." – Nick Bostrom

~

So let’s say we be super careful about giving the goal to the AI and give it some super nice goal. We keep the AI’s capabilities limited, slowly increasing them, in each step making ourselves sure AI is not a threat by testing the AI behavior in some kind of “sandbox” controlled safe environment. As AI becomes more capable, becomes used in many domains of economy, makes less mistakes, and becomes more safe. At this point, any remaining “alarmist” would have several strikes against them:

  • A history of alarmists predicting harm from the growing capabilities of robotic systems and being repeatedly proven wrong.
  • A clear empirical trend: the smarter the AI, the safer and more reliable it has been.
  • Large and growing industries with vested interests in robotics and machine intelligence.
  • A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.
  • A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

So we let the AI into the wild. It behaves nicely at first, but after a while it starts to change its environment to achieve its final goals. The AI, being better at strategizing than humans, behaved cooperatively while it was weak, and started to act on its final goals only when it became strong enough that it knew we couldn't stop it. It's not that this problem can't be solved; it's just that "each time we hear of a seemingly foolproof security design that has an unexpected flaw, we should prick up our ears. These occasions grace us with the opportunity to abandon a life of overconfidence" – Nick Bostrom

~

Imagine three scenarios:

  1. Peace
  2. Nuclear war kills 99% of the world's population.
  3. Nuclear war kills 100%.

Obviously, we prefer 1 over 2 over 3. But how big are the differences between these scenarios? The difference in the number of people killed is very large between scenarios 1 and 2, but not so large between scenarios 2 and 3. More important is the difference in how bad these scenarios are. Here, the difference between 2 and 3 is much larger than the difference between 1 and 2, because if 3 comes to pass, it's not only the present people that are killed but the whole future that is destroyed. To put this into perspective, imagine what a shame it would be if we had gone extinct, say, a thousand years ago. Counting all human lives across space and time, far more lives, friendships, loves, and experiences in general would be lost in scenario 3 than in scenario 2. Even if we never leave Earth, the total number of people to exist in the future could be as high as 10^16: 1 billion people living on Earth at a time, each living 100 years, for the next 1 billion years (10^9 people × 10^9 years / 100 years per life = 10^16 lives). An argument can be made that reducing the probability of extinction by 0.0001% could be more valuable than the lives of all the people living on Earth today. The numbers become even more mind-boggling if we consider the figure of 10^43 lives mentioned earlier in this post.

~

The problem of expressing human values in a programming language and placing them into an AI is extremely hard. It is also arguably the most important problem in the world today. And we don't know how to solve it. But that is not the end: on top of all that, human brains are really bad at thinking about these kinds of things. Just a couple of examples:

  • We like to reason from past examples. We don't have past examples of greater-than-human AI, or of any extinction event, so we underestimate their probability.
  • We think "we knew it all along" even when we didn't, and in line with that, we think past catastrophes were more predictable than they actually were. We actually can't predict as well as we think.
  • It's hard for us to change our opinions. For people like me, who have formed the opinion that technology is generally good and that anti-technology people have generally had bad arguments in the past, it's hard to hear about AI risks and take them seriously.
  • We have trouble with big numbers; for example, it's all the same to us whether 2,000, 20,000, or 200,000 birds get saved from drowning in oil ponds. The numbers involved in the future of humanity are extremely large.
  • We can measure the cost of preparing for catastrophes, but we can't measure the benefits, so we focus more on the costs. "History books do not account for heroic preventive measures." – Nassim Taleb

~

This is just the beginning. You can read about the rest of the problems, and proposed solutions to those problems, in Bostrom’s Superintelligence book. The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct.

The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else. – Eliezer Yudkowsky

Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format. – Nick Bostrom

Psychology of Computer Programming: Conquering the Imposter Syndrome

Recently I came across a lot of posts about psychological problems related to computer programming. As I read them, I realized these were the same problems I had. The more I read, the more I searched for solutions. This post is a travelogue of my discoveries. Just to warn you: there are a lot of links in the text ahead, and you don't need to click on every one to understand it (beware of tab explosion). If you open any link at all, I would first recommend "How to Make Mistakes", a short essay by Daniel Dennett.

The problem

Two things are going on that are literally driving programmers crazy. One is something known as the “imposter syndrome.” That’s when you’re pretty sure that all the other coders you work with are smarter, more talented and more skilled than you are. You live in fear that people will discover that you are really faking your smarts or skills or accomplishments. The trap of imposter’s syndrome is that programmers think they need to work harder to become good enough. That means spending more time coding — every waking minute — and taking on an increasing number of projects. That feeling is called the “Real Programmer” syndrome as named by a post that went crazy on Reddit last week. The Real Programmer lives only to code. – The Stress Of Being A Computer Programmer Is Literally Driving Many Of Them Crazy

He said: “Deep down know I’m ok. Programming since 13, graduated top of CS degree, got into Microsoft – but I feel like I’m an imposter.” I told him, straight up: You Are Not Alone. – I’m a phony. Are you?

From the outside, it would appear I was on the textbook path of programming. Started making websites at 15. Took programming and web design classes in my tech-oriented high school. Was accepted by my first choice school and majored in Computer Engineering. Had great internships at a tech giant. Wrote code that was used by millions of people. Graduated with distinction. Cofounded a software startup. And yet despite doing everything right, I didn’t think of myself as a good programmer. Impostor Syndrome instilled in me a deep fear of failing. I was afraid to speak up or ask questions for fear of saying something stupid, and people would find out I didn’t really know my stuff. – Overcoming Impostor Syndrome

Not long ago one of our programmers just lost it and he lost it good. He walked into the manager’s office and began screaming strange things. If I didn’t know him as well as I did I would have thought that he was on some kind of drug. But what had really happened was nothing short of a complete mental breakdown. – I Knew a Programmer that Went Completely Insane

What is a Real Programmer, you might ask? A Real Programmer is someone who loves programming! They love it so much that it’s what they spend all their time doing. In fact, a Real Programmer loves programming so much that they’re happy just to have the chance to do it. Paying them is just a formality because the Real Programmer doesn’t really consider it “work”. (…) It permeates the industry’s culture. You hear it from fellow programmers, managers, and investors. If you want to succeed as a programmer you have to at least look like a Real Programmer even if you’re not one at heart. So you get people working evenings and weekends just for appearances and they start to burnout. – IT Professional absolutely nails the “Real Programmer” mindset that is so pervasive in IT workplaces

Yet, like plenty of other fellow programmers, I feel completely worthless. It does not come a day where the impostor syndrome makes me feel that all I have managed to achieve is the result of simple luck. – I don’t want to be a Real Programmer

Branching out into other fields, having hobbies other than programming can be a tremendous benefit to your day job. You don’t need to burn a bazillion hours writing code. Burn that time writing, or reading, or arguing with someone over coffee (or your favorite scotch!). Burn that time running, or lifting, or both. Don’t burn yourself out to be a better programmer. Do what you love, and love many things. You will be better for it. – How to be a sane programmer

The type of thinking about the need to work long hours is dangerous because it is deceiving and can end up killing you. The drive for perfection is unfortunately a journey to insanity. Plain and simple, constantly tweaking something without delivering is a developer’s pit of despair. When I first arrived at New Relic I felt, like most do (at least that’s what I tell myself to help me sleep at night), that I was drinking from a fire-hose. I was intimidated by all of the amazing talent that is here, surrounded by experts in their fields, polyglots and so on. – Nerd Life Balance Part 2: Behaviors That Destroy the Balance

Solutions

(the next two sections are taken from Google I/O 2009 – The Myth of the Genius Programmer)

There is no genius programmer

“Can you guys please give Subversion on Google Code the ability to hide specific branches?” “Can you guys make it possible to create open source projects that start out hidden to the world, then get ‘revealed’ when they’re ready?” “Hi, I want to rewrite all my code from scratch, can you please wipe all the history?”

So what do these all have in common? There's a lot of insecurity going on, right? This is a common feeling that we all have. We were actually getting these requests last year at I/O: people were coming up to me and saying these sorts of things. So it got us thinking: what's going on with the psychology here? What's going on in people's heads? Why do they want to hide their code so much? What's really at the bottom of this?

“A pervasive elitism hovers in the background of collaborative software development: everyone secretly wants to be seen as a genius.”

This is rooted in a general desire not to look stupid. Everybody, I think, wants to look like a smart developer. I know I certainly do, to some extent. There are a lot of different reasons why people do this, and we're going to start with something seemingly unrelated. Why do people buy products endorsed by celebrities? Michelle Obama wore a dress to the Inauguration; boom, suddenly it sold out. Michael Jordan wears Nike shoes, and everyone wants to buy Nikes because they love Jordan or basketball. What's really going on here? Do you actually believe that if you buy Air Jordans, you are going to be as good as Michael Jordan? There's some nugget of human psychology at work here: it's in our instinct to find celebrities, to idolize people and want to be like them, and we latch on to whatever simple behaviors or material tokens remind us of that celebrity.

That's true in the world of programming as well. We have Linus Torvalds, to some extent Bill Gates even, Guido here at Google – you know, he wrote Python himself, right? Not quite true. Did Linus write Linux all by himself? We have Kernighan and Pike, and Kernighan and Ritchie. These guys don't always deserve all the credit. They certainly deserve some of it – they're the leaders, or they started something – but they're mythologized. The personas they become are bigger than life, rooted in a little nugget of truth or fact and a whole lot of myth. When we say "the myth of the genius programmer", we're talking about the myth of "here's a genius who goes off into a cave, writes this brilliant thing, reveals it to the world, and is famous forever". The reality is that's not how it works at all. There are, in fact, geniuses… but they are so incredibly rare that it's almost a meaningless term. The myth just isn't true.

So the ultimate geek fantasy is to go off into your cave, work, type in code, and then shock the world with your brilliant new invention. It's a desire to be seen as a genius by your peers. But there's a flip side to that too. It's not just "I want to be a genius and shock the world"; it's also "I'm insecure". What I mean by that is: "All right, maybe I won't be a genius. Maybe I won't shock the world with my brilliance, but at least I don't want people to see my trail of failures and mistakes, and I'm gonna cover my tracks." People want to be seen as clever, and clever people don't make mistakes. So the result is that people wind up working in a cave. A classic example of this: how long will you drive around before asking for directions? It's hard to admit that you've made mistakes sometimes, especially publicly. That's why we showed those quotes at the beginning, with people saying "Can you erase my history? Can you hide my project until it's perfect?"

Fail fast

Think about the way you interact with your compiler. You have a really tight feedback loop. You write a function, you compile it, make sure it at least compiles. Maybe you write a unit test if you're doing great. But nobody sits down and writes thousands and thousands of lines of code and then runs their compiler for the first time. It just doesn't happen.
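(To make the talk's point concrete – this sketch is mine, not the speakers', and the function and test names are invented for illustration – here is what that tight loop looks like: a tiny function, a tiny test, run immediately after every small change.)

```python
# A minimal fail-fast loop: write a small function, test it right away,
# so failures surface while the code is still small and cheap to fix.
import unittest


def median(values):
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median() of an empty list")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

    def test_empty_input_fails_loudly(self):
        with self.assertRaises(ValueError):
            median([])


if __name__ == "__main__":
    unittest.main()  # run after every small change, not after thousands of lines
```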

I think a big issue around failure is also just natural human fear. I can relate to this personally. I started learning banjo a few years ago, playing in bluegrass jams, and they would occasionally call on me to do banjo solos, which is really, really hard to learn, and I just wouldn't do it. Someone took me aside and said, "You realize that 50% of learning to solo is just not caring how good you sound and losing the fear." It was totally true. I thought, "All right, these are my friends. If I sound terrible, who cares?" And sure enough, he was absolutely right. I started playing really bad solos, but they got better and better, and I kept learning, and that was a huge step. So if you can just make that mental shift and say, "It's all right, I'm gonna fail, and it's not a big deal" – no fear – you move on, you learn.

An executive makes a bad business decision, and the company loses $10 million. The next morning he comes into work and his secretary says, "The CEO wants to see you in his office." He hangs his head: "This is it. I'm gonna get fired." He walks into the CEO's office and says, "So I guess you want my resignation." The CEO looks at him and says, "Resignation? I just spent $10 million training you. Why would I fire you?"

I lived in Italy for three years. I moved there, I had been studying Italian, and I was really proud to use my Italian. I went into a cafe and ordered a sandwich, and they gave me this massive sandwich, and I wanted a knife to cut it with. So I thought I'd be cool and use my Italian, and I promptly asked them for a toothbrush to cut my sandwich. The guy just looked at me. I said, "Toothbrush." He said, "No." But I never made that mistake again. Speaking a language in a foreign country is very intimidating – you're just so scared of looking like a fool – but it's the easiest way to learn: that sort of white-hot fear going up your neck because you asked for something embarrassing.

It's not just about embracing failure; it's also about failing fast, iterating as quickly as we can. This is something we actually talk about a lot at Google: don't just fail, fail quickly and pick up and try something different as fast as you can. That's why we have Google Labs now, where people experiment with different projects, and if they fail, that's fine – they'll put something up or change it the next day and try again. The faster you can fail and the faster you can iterate, the faster you will learn and get better. Practice makes your iteration-failure cycle faster, and it's less scary to fail, because you'll tend to have smaller failures. The failures tend to get smaller over time and the successes tend to get larger, and that's a trend you'll see, especially if you're learning as you fail fast.

(the next two sections are taken from EMF2012 – Programming is terrible)

False dichotomy of “good” and “bad” programmers

Many blogs claim to elucidate a dichotomy of programmers – good and bad. Upon careful inspection, most of them turn out to actually describe the following two types:

A. Programmers who are like me.
B. Programmers who are not like me.

The assertion is that if you copy their personality (like a cargo cult), you too can be a successful programmer. Sometimes it is more veiled:

A. Programmers who use my favourite language.
B. Programmers who do not use my favourite language.

Or:

A. Programmers who share my political beliefs.
B. Programmers who do not share my political beliefs.

Why do we do this? It's easy and gets blog hits. Everyone loves a simple answer to a complex problem, especially when the two choices are emotionally charged. Better still when the good programmers have magical super powers. You'll hear terms like rockstar, ninja, founder, entrepreneur, all used in the same pre-pubescent machismo that our industry is drowning in. Unfortunately, it's total bollocks. The claim that "some programmers are crazily more productive than others" comes from one study, on batch processing vs interactive programming, in 1960. On twelve people. In a half-hour session. We've been repeating this myth endlessly, and it's destructive: it's either repeated by idiots who believe they have nothing to learn from others, or repeated by learners to explain why they shouldn't try to learn.

So, are there two types of programmers? Probably not, but if I were to try, I'd say:

A. Programmers who know they will make mistakes.
B. Programmers who think they will not make mistakes.

It’s OK to write ugly code

Write code as if it were mistaken and you will have to change it, again and again – because you will. Fail fast and repeatedly. It is easier to get something right by getting it wrong a couple of times, and it is easier to get it wrong a couple of times if you don't write so much code from the outset. Try to think a little more about how the code will be called than about how it works: it is far easier to change an implementation than an interface. Don't be an artist. Don't labour over the "right" way to do things, but don't paint yourself into a corner. Write code that is easy to replace, rather than to extend. Bear in mind: it is OK to write ugly code, as long as the things using it don't have to write uglier code to use it. As you get further in programming, you will understand that the biggest problems are social, not technical.
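(A hedged sketch of that last point – my example, not the speaker's; the function names are invented. The idea: keep the interface small and boring, and the ugly implementation behind it stays cheap to replace.)

```python
# Callers depend only on word_counts(); the implementation behind it
# can be ugly, and can be thrown away, without any caller changing.
from collections import Counter


def word_counts(text):
    """Return a mapping of lowercase word -> occurrence count."""
    return _count_v1(text)


# The first, ugly-but-working implementation. It is OK for this to be ugly:
# nothing outside this module calls it directly.
def _count_v1(text):
    counts = {}
    for raw in text.lower().split():
        word = raw.strip(".,!?;:")
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts


# A later, cleaner replacement; adopting it means changing one line
# in word_counts(), not hunting down every caller.
def _count_v2(text):
    words = (raw.strip(".,!?;:") for raw in text.lower().split())
    return dict(Counter(word for word in words if word))


print(word_counts("It is OK to write ugly code. Ugly code, really!"))
```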

Mistakes are expected

The other day, I came to the conclusion that the act of writing software is actually antagonistic all on its own. Arcane languages, cryptic errors, mostly missing (or at best, scattered) documentation – it’s like someone is deliberately trying to screw with you, sitting in some Truman Show-like control room pointing and laughing behind the scenes. At some level, it’s masochistic, but we do it because it gives us an incredible opportunity to shape our world.

How to Make Mistakes

Making mistakes is the key to making progress. There are times, of course, when it is important not to make any mistakes–ask any surgeon or airline pilot. But it is less widely appreciated that there are also times when making mistakes is the secret of success. What I have in mind is not just the familiar wisdom of nothing ventured, nothing gained. While that maxim encourages a healthy attitude towards risk, it doesn’t point to the positive benefits of not just risking mistakes, but actually of making them. Instead of shunning mistakes, I claim, you should cultivate the habit of making them. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. You should seek out opportunities to make grand mistakes, just so you can then recover from them.

There are no things every developer needs to know

For example, take this: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) Yes, but what about those developers who don't need to know a thing about character sets – people who do numerical analysis or machine learning, work with graphics, or work in a hundred other areas that don't deal with character sets? It's better to know something about Unicode than not, but it simply is not the case that every developer needs to know about Unicode. The real reason people write titles like that is that they generate traffic. There are generally two reasons to learn something: either it's useful for the problem you are solving right now, or you are interested in it just for its own sake. The problem with learning for any other reason is that the brain forgets things. If you are learning something with the idea "maybe I will need this someday", the probability that you are going to forget it is high. Then there is a lot of talk about functional programming, vi, emacs, Arch Linux, etc. There's also a theory that says knowing functional programming is going to make you a better programmer. It's not clear whether that theory is true, and the evidence in its favor is purely anecdotal. It may be that better programmers are more drawn to learning functional programming (in technical language: it's just selection bias and signalling). At least for me, learning functional programming did not make me a better programmer. Learning how to fail fast did.

Perfectionism will kill you

If you are having problems with perfectionism, ask yourself the following:

  1. Is this really important or not? How important will it be in a year? In ten years? If it's not that important, don't act like it is – because it's not. Perfectionism is a tendency to overcommit, when what you really should be doing is optimizing the level of your commitment.
  2. Is there anything good in work that's not done perfectly? Most things can't be divided into absolute categories. For example, is the floor of your room perfectly clean? It's not; there is always a degree of cleanliness, and the threshold for cleaning your room is not "zero dust" – if it were, you would be cleaning your room all the time. What is the threshold after which the thing you are working on is good enough? Do not maximize in situations where you should satisfice. It's important to know the difference.
  3. Is perfection something that could even be achieved? Let's say someone offers you a bet: they'll pay you 10 dollars if a six-sided die rolls anything but a 6, and you pay them 10 dollars if it rolls a 6. And you lose. Does it make any sense to say "I shouldn't have played that game"? Technically, you should have played – it was the best decision you could make with the limited information you had; you didn't know the future. In the same sense, a lottery player who, after seeing the winning combination, says "I should have played that combination" is wrong – in fact, he shouldn't have played the lottery at all, because the expected gains are negative (see the worked sketch after this list). There is no sense in saying the result should have been better than it is if that would require you to be omniscient and omnipotent.
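To make point 3 concrete, here is the expected-value arithmetic as a quick model. (The dice numbers are from the example above; the lottery numbers are invented for illustration, since the text only says the expected gains are negative.)

```python
# Expected value of the dice bet: win $10 on any roll but a 6, lose $10 on a 6.
p_win = 5 / 6
p_lose = 1 / 6
dice_ev = p_win * 10 + p_lose * (-10)
print(f"EV of the dice bet: ${dice_ev:.2f} per game")  # about +$6.67

# A simplified, hypothetical lottery for contrast: a $2 ticket with a
# one-in-a-million shot at $1,000,000. Negative EV, so "I should have played
# that combination" is the wrong lesson even after seeing the winning numbers.
ticket_price = 2
p_jackpot = 1 / 1_000_000
lottery_ev = p_jackpot * 1_000_000 - ticket_price
print(f"EV of the lottery ticket: ${lottery_ev:.2f} per ticket")  # -$1.00
```

Taking the dice bet was rational before the roll and remains rational after it: losing a positive-EV bet is not a mistake, and winning a negative-EV one is not a success.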

Inaction hurts more than action

Consider this scenario. You own shares in Company A. During the past year you considered switching to stock in Company B but decided against it. You now find that you would have been better off by $1200 if you had switched to the stock of Company B. You also owned shares in Company C. During the past year you switched to stock in Company D. You now find out that you'd have been better off by $1200 if you kept your stock in Company C. Which error causes you more regret? Studies show that about nine out of ten people expect to feel more regret when they foolishly switch stocks than when they foolishly fail to switch stocks, because most people think they will regret foolish actions more than foolish inactions. But studies also show that nine out of ten people are wrong. Indeed, in the long run, people of every age and in every walk of life seem to regret not having done things much more than they regret things they did, which is why the most popular regrets include not going to college, not grasping profitable business opportunities, and not spending enough time with family and friends. – Daniel Gilbert, Stumbling on Happiness

Learning how to program

One of the things I always try to communicate, especially to students who are just starting out in computer science, is that even though it's fun to write code alone, late at night in your basement, actually writing software that's successful is an inherently collaborative activity. It forces you to deal with people and talk with people, and that's why we encourage people to get involved in Open Source, because it's sort of like, "Okay, maybe you're still in college, but here's your chance to actually work with people, work on a team, and see what it's gonna be like." One of the things I always ask people is, "Can you name a piece of software that's really successful, really widely used by a lot of people, and was written by one person?" Fitzpatrick: And before anybody yells out Metafont, that's not widely used, okay? But anyway, this is a trap, this sort of wanting to be a genius. – Ben Collins-Sussman

Some useful links you can look at (you don't need to – I didn't use them all myself; it's just a list my friends and I came up with):

“We now know a thousand ways not to build a light bulb” – Thomas Edison

“An expert is a man who has made all the mistakes which can be made, in a narrow field.” – Niels Bohr

The Search for Truth and Aesthetics of Pessimism

Now the Lord God had planted a garden in the east, in Eden; and there he put the man he had formed. The Lord God made all kinds of trees grow out of the ground—trees that were pleasing to the eye and good for food. In the middle of the garden were the tree of life and the tree of the knowledge of good and evil. (…) The Lord God took the man and put him in the Garden of Eden to work it and take care of it. And the Lord God commanded the man, “You are free to eat from any tree in the garden; but you must not eat from the tree of the knowledge of good and evil, for when you eat from it you will certainly die.” (…) Now the serpent was more crafty than any of the wild animals the Lord God had made. He said to the woman, “Did God really say, ‘You must not eat from any tree in the garden’?” The woman said to the serpent, “We may eat fruit from the trees in the garden, but God did say, ‘You must not eat fruit from the tree that is in the middle of the garden, and you must not touch it, or you will die.’” “You will not certainly die,” the serpent said to the woman. “For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil.” When the woman saw that the fruit of the tree was good for food and pleasing to the eye, and also desirable for gaining wisdom, she took some and ate it. She also gave some to her husband, who was with her, and he ate it. Then the eyes of both of them were opened, and they realized they were naked; so they sewed fig leaves together and made coverings for themselves. – Bible, Genesis

There’s something strange and dark about knowledge, wisdom and truth, and this darkness has been the subject of many ancient myths and legends. Has this theme any basis in fact? One thing we know for sure: truth hurts. Let’s begin with a trivial example:

Suppose that you started off in life with a wandering mind and were punished a few times for failing to respond to official letters. As a result, you would be less effective than average at responding, so you got punished a few more times. Henceforth, when you received a bill, you got the pain before you even opened it, and it lay unpaid on the mantelpiece until a Big Bad Red late payment notice with a $25 fine arrived. More negative conditioning. Now even thinking about a bill, form or letter invokes the flinch response. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The “Unhappy Thing” — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you. – Less Wrong: Ugh fields

The expression “harsh truth” is so familiar (try googling it) that even cracked.com is talking about the 6 harsh truths that will make you a better person:

The human mind is a miracle, and you will never see it spring more beautifully into action than when it is fighting against evidence that it needs to change. Your psyche is equipped with layer after layer of defense mechanisms designed to shoot down anything that might keep things from staying exactly where they are — ask any addict. – 6 Harsh Truths That Will Make You a Better Person

Not only that, but there are scientific studies on something called optimism bias:

The optimistic bias is seen in a number of situations. For example: people believing that they are less at risk of being a crime victim, smokers believing that they are less likely to contract lung cancer or disease than other smokers, first-time bungee jumpers believing that they are less at risk of an injury than other jumpers, or traders who think they are less exposed to losses in the markets.

And self-serving bias:

When individuals reject the validity of negative feedback, focus on their strengths and achievements but overlook their faults and failures, or take more responsibility for their group’s work than they give to other members, they are protecting the ego from threat and injury. These cognitive and perceptual tendencies perpetuate illusions and error, but they also serve the self’s need for esteem. For example, a student who attributes earning a good grade on an exam to their own intelligence and preparation but attributes earning a poor grade to the teacher’s poor teaching ability or unfair test questions is exhibiting the self-serving bias.

Then we have the phenomenon of euphemisms:

It is obvious that the purpose of using euphemisms is to avoid something unpleasant or offensive. They come from psychological needs. Psychologically, if not linguistically, meanings can be defined by the sum of our responses to a word or an object. Words themselves may be seen as responses to stimuli. After a word has been associated for a long period of time with the stimulus that provokes it, the word itself picks up aspects of the response elicited by that stimulus. When unpleasant elements of response attach themselves strongly to the word used to describe them, we tend to substitute another word free of these negative associations. In this way, psychologists tell us, euphemisms are formed. – Cultural Concepts and Psychological Tendencies in Euphemisms

Sometime during my life toilet paper became bathroom tissue… Sneakers became running shoes. False teeth became dental appliances. Medicine became medication. Information became directory assistance. The dump became the landfill. Car crashes became automobile accidents. Partly cloudy became partly sunny. Motels became motor lodges. House trailers became mobile homes. Used cars became previously owned transportation. Room service became guest room dining. Constipation became occasional irregularity. (…) The CIA doesn’t kill anybody anymore. They neutralize people. Or they depopulate the area. The government doesn’t lie. It engages in disinformation. Poor people used to live in slums. Now ‘the economically disadvantaged’ occupy ‘substandard housing’ in the ‘inner cities.’ And a lot of them are broke. They don’t have ‘negative cash flow.’ They’re broke! Because many of them were fired. In other words, management wanted to ‘curtail redundancies in the human resources area,’ and so, many workers are no longer ‘viable members of the workforce.’ – George Carlin on Euphemistic Language

It looks like humans believe whatever they want to believe and ignore beliefs that are frightening and negative. This has obvious implications for our personal lives, but what about the big questions? The cognitive algorithm seems to be the same there. Take atheism, for example:

Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand. It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known. – Carl Sagan, Pale Blue Dot

Copernicus, Kepler, and Galileo have shown us that we are not at the centre of the universe, Darwin has shown us we are not made in the image of god, and modern neuroscience is showing us today that free will, at least as we commonly conceptualize it, is an illusion.

The truth, indeed, is something that mankind, for some mysterious reason, instinctively dislikes. Every man who tries to tell it is unpopular, and even when, by the sheer strength of his case, he prevails, he is put down as a scoundrel. – H. L. Mencken

Men fear thought as they fear nothing else on earth, more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habit. Thought looks into the pit of hell and is not afraid. – Bertrand Russell

I write this to you, dear Elizabeth, only in order to counter the most usual proofs of believers. Every true faith is infallible. It performs what the believing person hopes to find in it. But it does not offer the least support for the establishing of an objective truth. Here, the ways of men divide. If you want to achieve peace of mind and happiness, have faith. If you want to be a disciple of truth, then search. – Nietzsche, Letter to his sister

Searching for truth, we tend not to search where it hurts the most, as confirmation bias demonstrates:

Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.

The myth of Prometheus can be read metaphorically, with Prometheus not as the fire-bringer but as the truth-bringer:

For Boccaccio, “In the heavens where all is clarity and truth, Prometheus steals, so to speak, a ray of the divine wisdom from God himself, source of all Science, supreme Light of every man.” With this, Boccaccio shows himself moving from the mediaeval sources with a shift of accent towards the attitude of the Renaissance humanists. Using a similar interpretation to that of Boccaccio, Marsilio Ficino in the fifteenth century updated the philosophical and more somber reception of the Prometheus myth not seen since the time of Plotinus. In his book written in 1476-77 titled Quaestiones Quinque de Mente, Ficino indicates his preference for reading the Prometheus myth as an image of the human soul seeking to obtain supreme truth. As Olga Raggio summarizes Ficino’s text, “The torture of Prometheus is the torment brought by reason itself to man, who is made by it many times more unhappy than the brutes. It is after having stolen one beam of the celestial light […] that the soul feels as if fastened by chains and […] only death can release her bonds and carry her to the source of all knowledge.” (…) … Mary Shelley’s 1818 novel Frankenstein is subtitled “The Modern Prometheus”, in reference to the novel’s themes of the over-reaching of modern humanity into dangerous areas of knowledge.

Speaking of dangerous knowledge, there is a BBC documentary of the same name:

The film begins with Georg Cantor, the great mathematician whose work proved to be the foundation for much of 20th-century mathematics. He believed he was God’s messenger and was eventually driven insane trying to prove his theories of infinity. Ludwig Boltzmann’s struggle to prove the existence of atoms and probability eventually drove him to suicide. Kurt Gödel, the introverted confidant of Einstein, proved that there would always be problems which were outside human logic. His life ended in a sanatorium where he starved himself to death.

Those mathematicians may have been the inspiration for Darren Aronofsky and his film Pi:

Personal note: When I was a little kid my mother told me not to stare into the sun, so once when I was six, I did. At first the brightness was overwhelming, but I had seen that before. I kept looking, forcing myself not to blink, and then the brightness began to dissolve. My pupils shrunk to pinholes and everything came into focus and for a moment I understood.

Restate my assumptions: One, Mathematics is the language of nature. Two, Everything around us can be represented and understood through numbers. Three, If you graph the numbers of any system, patterns emerge. Therefore, there are patterns everywhere in nature. Evidence: The cycling of disease epidemics; the wax and wane of caribou populations; sun spot cycles; the rise and fall of the Nile. So, what about the stock market? The universe of numbers that represents the global economy. Millions of hands at work, billions of minds. A vast network, screaming with life. An organism. A natural organism. My hypothesis: Within the stock market, there is a pattern as well… Right in front of me… hiding behind the numbers. Always has been.

The hero of the film Pi “stared at the sun” for too long and ended tragically. This brings to mind another great myth:

Often depicted in art, Icarus and his father attempt to escape from Crete by means of wings that his father constructed from feathers and wax. Icarus’ father warns him first of complacency and then of hubris, asking that he fly neither too low nor too high, because the sea’s dampness would clog his wings or the sun’s heat would melt them. Icarus ignored instructions not to fly too close to the sun, and the melting wax caused him to fall into the sea, where he drowned.

It also reminds us of another individual who stared for too long:

He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you. – Nietzsche

We all know Aesop’s fable of The Ant and the Grasshopper:

The fable concerns a grasshopper that has spent the warm months singing while the ant (or ants in some versions) worked to store up food for winter. When that season arrives, the grasshopper finds itself dying of hunger and begs the ant for food. When asked what it did all summer, it replies that it sang, and it is rebuked for its idleness and advised to dance during the winter.

What does this have to do with anything? The ant took responsibility for its life – the work was hard, but it took reality seriously – while the grasshopper was living in a fantasy world. As in The Matrix, the ant took the red pill.

You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland and I show you how deep the rabbit-hole goes.

How long would the naive, idealistic grasshopper survive among the hard-core cowboys of the western frontier?

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. – Robert Heinlein

Although Heinlein is speaking against the insects, he and the ant are on the same side here – you need to be tough and resilient, and take the truth even when it hurts.

Despite our squeamishness about cultural stereotypes, there are tons of studies out there showing marked and quantifiable differences between Chinese and Westerners when it comes to parenting. In one study of 50 Western American mothers and 48 Chinese immigrant mothers, almost 70% of the Western mothers said either that “stressing academic success is not good for children” or that “parents need to foster the idea that learning is fun.” By contrast, roughly 0% of the Chinese mothers felt the same way. Instead, the vast majority of the Chinese mothers said that they believe their children can be “the best” students, that “academic achievement reflects successful parenting,” and that if children did not excel at school then there was “a problem” and parents “were not doing their job.” Other studies indicate that compared to Western parents, Chinese parents spend approximately 10 times as long every day drilling academic activities with their children. – Why Chinese Mothers Are Superior

The Chinese are the ants, while the Westerners are the grasshoppers. This approach to parenting is also known as “tough love”.

Man was, and is, too shallow and cowardly to endure the tragic divine comedy of life. Upon looking into the Abyss, man becomes afraid. Unable to face the truth, he hides it from himself. Idealism is cowardice. Most men are unwilling to take responsibility for their own lives. They use Utopian Ideals to wait for a future “heaven on earth” to escape living. It is the strong who are pessimistic; they know man, and know that no Ideal or Ideology will ever change human nature. – Unknown, inspired by Oswald Spengler

The question of whether world peace will ever be possible can only be answered by someone familiar with world history. To be familiar with world history means, however, to know human beings as they have been and always will be. There is a vast difference, which most people will never comprehend, between viewing future history as it will be and viewing it as one might like it to be. Peace is a desire, war is a fact; and history has never paid heed to human desires and ideals… – Oswald Spengler

This is the aesthetics of pessimism: pessimism is bravery, idealism is cowardice. Pessimism is seen as a good in and of itself.

The development of the intellect will at last extinguish the will to reproduce, and will at last achieve the extinction of the race. Nothing could form a finer denouement to the insane tragedy of the restless will. Why should the curtain that has just fallen on defeat and death, always rise again upon a new life, a new struggle, and a new defeat? How long shall we be lured into this much ado about nothing, this endless pain that leads only to a painful end? When shall we have the courage to fling defiance into the face of the will? To tell it that the loveliness of life is a lie and that the greatest boon of all is death. – Arthur Schopenhauer

Or, as the modern mass-media Schopenhauer put it:

I think human consciousness is a tragic misstep in evolution. We became too self-aware; nature created an aspect of nature separate from itself. We are creatures that should not exist by natural law. We are things that labor under the illusion of having a self, an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody. Maybe the honorable thing for our species to do is deny our programming, stop reproducing, walk hand in hand into extinction, one last midnight, brothers and sisters opting out of a raw deal. – Rust Cohle, True Detective

Truth is hard, so pessimistic beliefs became a signal of intellectual honesty, toughness, and wisdom. If you want to convey the image of a truth seeker, pessimism seems to be the way to go. Thus a pessimism bias is created. And because pessimism has become such a cheap signal, pessimistic beliefs get rejected out of hand – the pessimist is taken to be just another would-be “truth seeker”. This is a problem, because many pessimistic beliefs seem to be true.

If you are a pessimist, ask yourself whether it is because pessimism is rational or because you see pessimism as more noble than optimism.