
Structure of disagreements

How many communicating civilizations are there in the visible universe? Alice thinks there are a lot. Bob thinks we are the only one. Why? Things you can take into account are:

  • How many stars are there?
  • Of those, how many have planets?
  • Of those, how many planets on average can potentially support life?
  • Of those, on how many does life actually develop?
  • Of those, how many are intelligent enough to develop civilization?
  • Of those, how many are communicating through space?
  • How long do they keep sending signals out?

This gives us the Drake equation. There are a lot of different ways to disagree about this question. Even if Alice thinks life develops on just 0.001 of potential planets and Bob thinks it’s 0.005, Bob may still think we are the only civilization while Alice thinks there are a lot of civilizations out there.

Even if their models become more similar, with Alice and Bob both moving to 0.002, Alice becomes even more sure that there are a lot of civilizations and Bob becomes even more sure that we are the only one.
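
As a minimal sketch of how that can happen, here are Alice’s and Bob’s models written out as toy Drake-style calculations in Python. Every parameter value is invented purely for illustration, and the factor names just follow the list above:

```python
# Drake-style estimate: just multiply the factors together.
# Every number below is invented purely for illustration.

def n_civilizations(stars, f_planets, n_habitable, f_life,
                    f_intelligence, f_communicating, f_broadcasting):
    return (stars * f_planets * n_habitable * f_life *
            f_intelligence * f_communicating * f_broadcasting)

alice = dict(stars=1e22, f_planets=0.5, n_habitable=0.1, f_life=0.001,
             f_intelligence=0.1, f_communicating=0.1, f_broadcasting=1e-6)
bob   = dict(stars=1e22, f_planets=0.2, n_habitable=0.01, f_life=0.005,
             f_intelligence=1e-6, f_communicating=0.01, f_broadcasting=1e-9)

print(n_civilizations(**alice))   # ~5e9 -> "a lot of civilizations"
print(n_civilizations(**bob))     # ~1   -> "we are probably the only one"

# Both move to f_life = 0.002: Alice's estimate doubles, Bob's drops by more
# than half -- they now agree more on that parameter, yet each is more sure
# of their original conclusion.
alice["f_life"] = bob["f_life"] = 0.002
print(n_civilizations(**alice), n_civilizations(**bob))
```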

The fundamental reason for disagreements is that people have different models of the world. If Alice and Bob could go parameter by parameter and agree on each, they would eventually come to an agreement. They could take, for example, the one about intelligence (of all planets which have developed life, how many develop a civilization) and decompose it into the plausible steps needed for intelligence to develop: development of cells, the cell nucleus (eukaryotes), sexual reproduction, multicellular life, big brains in tool-using animals, etc. (examples taken from Robin Hanson). That will surely give them a lot of things to talk about. When they resolve all that, they can move on to the next part they disagree about.

The problem, of course, is that disagreements in real life aren’t that simple. In real life you don’t have an equation which two people insert numbers into, so that you can inspect the numbers and see which ones differ. In real life it’s hard to know what the root cause of a disagreement is. If only we had such models, just imagine how much easier our lives would be! Luckily, you can build such models. You can do it on the fly in the middle of a conversation, and in most cases it takes just a few seconds. And I’m not talking just about Fermi estimates. Let’s call this more general practice quick modeling.

First you need to do quick modeling of your own reasons for believing things and explain them to the other person. The better you are at quick modeling, the better you can explain yourself. It would be great if the other person did the same, but if they don’t, we can do it for them. If you do it in a polite way, to the other person it will look like you are simply trying to understand them, which you in fact honestly are trying to do. For example, Bob thinks Trump is better than Hillary on the question of immigration, and you disagree. There are different reasons to be concerned about immigration, and when Bob tells you his reasons, you can try to put what he said in simpler terms using some basic model. For starters, just ask yourself:

  • What are, in fact, the things we are talking about? In the case of immigration they could be: the president, laws, immigrants, natives, the economy, crime, stability, culture, terrorism, etc.
  • What are the causal connections between the things we are talking about? In the case of immigration, for example, some people may think immigrants influence the economy positively, some negatively.
  • What is the most important factor here? Of all the things listed, what is the most important factor to Bob? Is there some underlying reason for that? What does Bob in general think is important?

The goal is to take what the other person is saying in terms of intuitions, analogies and metaphors and transform it into a model. In the previous example, you can think of it as a Drake equation for immigration. Imagine the world 5 years from now: in one version Trump makes the decisions on immigration (world T), in the other Hillary does (world H). Since the topic is only immigration, don’t imagine the whole presidency; keep everything else the same and change only the parameter under discussion. Which world is better, i.e. what is the difference in utility between the two?
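
As a minimal sketch of what such a model could look like in code: the factors come from the list above, while the weights and scores are made-up placeholders standing in for Bob’s stated reasons, not real estimates about immigration. The shape is just: the things that matter, how much each matters to Bob, and how each differs between world T and world H.

```python
# Purely illustrative: the weights and scores are placeholders for Bob's
# stated reasons, not real estimates about immigration.
bobs_model = {
    # factor:    (weight, score in world T, score in world H)
    "economy":    (0.3,  0.5, -0.5),
    "crime":      (0.2,  0.5, -0.5),
    "culture":    (0.3,  1.0, -1.0),
    "terrorism":  (0.2,  0.5,  0.0),
}

def utility(world):              # world 0 = T, world 1 = H
    return sum(weight * scores[world] for weight, *scores in bobs_model.values())

print("world T:", round(utility(0), 2), " world H:", round(utility(1), 2))
```

Once it is written down like this, the disagreement moves from “Trump vs. Hillary” to the individual weights and scores, which you can then discuss one by one.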

Bob has told you the reason he thinks what he thinks, and you have built a quick model of it. The next step is to present that model to him to see if you understood him correctly. If you got it wrong, you can work together to make it more accurate. When he agrees that you got the basic model right, that’s the first step: you understand the core of his model. He may not have thought about the topic in those terms before, and the model may have been implicit in his head, but now you have it in some explicit form. The first model you build will be oversimplified; Bob may add something to it, and you should also try to find other important things to add. Take the most important part you disagree about and decompose it further. When you resolve all the important sub-disagreements, you are done.

Let’s take a harder case. Alice voted for Trump because he will shake things up a bit. How do you build a model from that? First step: what are we talking about? The world is currently in state X, and after Trump it will be in state Y. In the process, the thing being shaken up is the US government, which affects the state we find ourselves in, and Alice thinks Y will be better than X. What sort of things get better when shaken up? If you roll a six-sided die and get 1, rolling again will probably get you a larger number. So if you think things are going really terribly right now, you will agree with Alice. (Modeling this with prospect theory gives the same result.) What gets worse when shaken up? Systems which are already in some good state. You may also have noticed that most systems get worse when shaken up, especially large complex systems with lots of parameters (high dimensionality), because there are more ways to do something wrong than to do it right. On the other hand, intelligent, self-correcting systems sometimes end up in local optima, so if you shake them up you can end up in a better state. To what degree does the US government have that property? What kind of system is it anyway? Might it be the case that some parts of the US government get better when shaken up and others get worse? The better you are at quick modeling, the better you can understand what the other person is saying.
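
A tiny sketch of the die-rolling intuition: whether a shake-up (a re-roll) is expected to help depends entirely on how good the current state already is.

```python
# The expected value of re-rolling a fair six-sided die is 3.5, so a re-roll
# is an improvement exactly when the current roll is below that.
expected_reroll = sum(range(1, 7)) / 6    # = 3.5

for current in range(1, 7):
    gain = expected_reroll - current
    print(f"current roll {current}: expected gain from re-rolling = {gain:+.1f}")
```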

When you notice the model has become too complex or you have gone in too deep, you can simply return to the beginning of the disagreement, think a bit about what the next most important thing would be, and try that route. You can think of the disagreement as a tree:

[Figure: a tree diagram of the disagreement]

At the root of the tree you have the original disagreement; the nodes below it (B and C in the picture) are the things on which the disagreement depends (e.g. the parameters in the Drake equation), and so on further down: what you think about B depends on D and E, etc. You can disagree about many of those, but some are more important than others. One thing you need to be aware of is going too deep too fast. In the Drake equation there are 7 parameters, and you may disagree right at the start about the first one, the number of stars. That parameter may depend on 5 other things, and you may disagree about the first one of those, etc. Two hours later you may resolve your disagreement about the first parameter, but when you come to the second parameter you realize that the first disagreement was, in relative terms, insignificant. That’s why you should first build a simple first approximation of the whole model, and only then go on decomposing it further. Don’t interrupt the other speaker to dive into a digression. Only after you have heard all of the basic parts do you have enough information to know which parts are important enough to decompose further. Avoid unnecessary digressions. The same principle holds not only for the listener but also for the speaker, who should try to keep the first version of the model as simple as possible. When someone is digressing too much it may be hard to politely ask them to speed up, but it can often be done, especially when debating with friends.
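
One way to picture that advice is as a traversal order on the tree: sketch all the top-level parts first, attach a rough importance to each, and only then descend into the most important one. A minimal sketch, with invented node names and importance numbers:

```python
from dataclasses import dataclass, field

@dataclass
class Disagreement:
    name: str
    importance: float              # rough, relative; made-up numbers below
    children: list = field(default_factory=list)

tree = Disagreement("A: the original disagreement", 1.0, [
    Disagreement("B: first parameter", 0.2,
                 [Disagreement("D", 0.15), Disagreement("E", 0.05)]),
    Disagreement("C: second parameter", 0.8),
])

def discussion_order(root):
    """Breadth-first: hear all the basic parts before decomposing any of them."""
    queue = [root]
    while queue:
        node = queue.pop(0)
        print(f"discuss {node.name} (importance {node.importance})")
        queue.extend(sorted(node.children, key=lambda c: c.importance, reverse=True))

discussion_order(tree)
```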

In some cases, there may be just one part of the equation which dominates the whole thing, and you may disagree about that part, which reduces the disagreement about A to a disagreement about B. Consider yourself lucky; that’s a rare find, and you have just simplified your disagreement. In that case, you can call B a double crux.

Applying the quick modeling technique often with other people will reveal that the models of the world people have can be very complex. It may take you a lot of time to come to an agreement, and maybe you simply don’t have the time. Or you may need many conversations about different parts of the whole model and have to resolve each part separately before coming to full agreement. Some people are just not interested in discussing some topics, some people are not intellectually honest, etc. – the usual limitations apply.

Things which may help:

  • Think of concrete examples. Simulate in your head “how would the world look if X”. That can help you with decomposition: if you run a mental simulation you will see what things consist of, what is connected to what, and what the mechanisms are.
  • Fermi estimates. Just thinking about how you would estimate a thing will get you into a modeling state of mind, and putting actual numbers in will give you a sense of the relative importance of things.
  • Ask yourself why you believe something is true, and ask the same about the other person. You can say to them in a humble tone of voice, “… that’s interesting, how did you reach that conclusion?” It’s important to actually want to know how they reached that conclusion, which you will in fact want to know if you are doing quick modeling.
  • Simplify. When things get too complex, fix the value of some variable. This one is identical to advice from LessWrong, so I will borrow an example from them: it’s often easier to answer questions like “How much of our next $10,000 should we spend on research, as opposed to advertising?” than to answer “Which is more important right now, research or advertising?” The other way of simplifying is to change just one value while holding everything else constant, what economists call ceteris paribus. Simulate how the world would look if just one variable changed. (See the sketch after this list.)
  • Think about what the edge cases are, but pay attention to them only when they are important. Sometimes they are; mostly they are not. So ignore them whenever you can. The other person is almost always talking about the average, standard case, and if you don’t agree about what happens on average there is no sense in discussing weird edge cases.
  • In complex topics: think about inferential distance. To take the example from LessWrong: explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn’t already know about evolution, they would understand the concepts of evidence, Occam’s razor, [etc… while] explaining the evidence for the theory of evolution to someone without a science background would be much harder. There may be some fundamental concepts which the other person is using and you are not familiar with; ask about them, and think about what you may learn from the other person. Also, try to notice if the other person doesn’t understand some basic concept you are using and try, in a polite, non-condescending way, to clarify what you mean.
  • In complex topics: think about differences of values. If you think vanilla tastes better than chocolate and the other person disagrees, that’s a difference of values. You should separate that from the model of how the external world works, and focus on talking about the world first. In most cases it makes no sense to talk about the values of different outcomes when you disagree even about what the outcome will be. What sometimes looks like a difference of values is often connected to what you think about how the world works. Talk about values only if you seem to completely agree about all of the relevant aspects of the external-world model. When talking about values, also do quick modeling, as usual.
  • Practice.
  • And most important for last: become better at quick modeling in general.
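
As a sketch of the “simplify by fixing a variable” item above: fix the total budget at $10,000, vary only the split, and compare. The payoff function and its coefficients are pure guesses; the point is the procedure of holding everything else constant.

```python
import math

BUDGET = 10_000   # fixed, so the only remaining variable is the split

def payoff(research_dollars):
    advertising_dollars = BUDGET - research_dollars
    # Square roots give diminishing returns; the coefficients are invented.
    return 3 * math.sqrt(research_dollars) + 2 * math.sqrt(advertising_dollars)

best = max(range(0, BUDGET + 1, 500), key=payoff)
print(f"best split in this toy model: ${best} research, ${BUDGET - best} advertising")
```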

There was a post on LessWrong about a method called Double Crux for resolving disagreements, but I think quick modeling is a better, more general method. Also, the Double Crux post mentions some things which I think you should not worry about:

  • Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is not usually the case that those are the problem. If at some point in the discussion it turns out that some of them are a problem, it will be crystal clear to you and you can just stop the discussion right there, but don’t worry about it beforehand.
  • Noticing of subtle tastes, focusing and other resonance checks, etc. Instead of focusing on how your mind may be making errors and introducing subtle biases, do what scientists and engineers have done for centuries and what works best: look at the world and build models of how the world works. When Elon Musk is designing batteries he doesn’t need to think about the subtle biases of his mind; he needs to be thinking about the model of the battery and how to improve it. When bankers are determining interest rates they don’t need to think about how their minds do hyperbolic discounting; they can simply replace that with exponential discounting. The same goes for disagreements: you need to build a model of what you think about the topic and of what the other person thinks, and decompose each sub-disagreement in turn.
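
A minimal sketch of the discounting point (the rates 0.10 and 0.5 are arbitrary illustration values): exponential discounting shrinks value by the same factor every year, while hyperbolic discounting devalues the near future steeply and the far future relatively little, which is what produces the familiar bias.

```python
# Present value of $100 received t years from now, under two discounting models.
def exponential(amount, t, r=0.10):       # standard "banker's" discounting
    return amount / (1 + r) ** t

def hyperbolic(amount, t, k=0.5):         # the shape system 1 tends to use
    return amount / (1 + k * t)

for t in (0, 1, 2, 5, 10, 20):
    print(f"t={t:>2}  exponential: {exponential(100, t):6.2f}   "
          f"hyperbolic: {hyperbolic(100, t):6.2f}")
```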

Advantages over double crux:

  • You don’t need both people to be familiar with the method.
  • It works even when there is no double crux. In my experience most disagreements are too complex for a double crux to exist.
  • There is no algorithm to memorize and you don’t need a whiteboard.

Not to mention that quick modeling is useful not just for resolving disagreements but also for making better decisions and, obviously, for forming more correct models of the world.

Simple rationality

How to be rational? Model things explicitly, and become good at it. That’s it. To be able to build good models is not easy and requires some work, but the goal is simple.

When thinking about something, there are things to take into consideration. For example, if you are thinking about moving into a new apartment, you may take into consideration the price of the current apartment, the price of the new apartment, how long your commute will be, whether you like the neighborhood, how important each of those things is to you, etc. We can make a list of things to take into consideration; we can call that the ontology of our model. You can decompose a thing into smaller things, in a similar way to how a mechanic would when repairing some equipment, or a physicist would when thinking about a system. It’s not good to forget to think about things which are important. It’s also not good to think too much about things which are not important. So there’s an optimal ontology which you should be aiming for. (math analogy: sets)
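
A minimal sketch of that apartment example (everything here is invented): the ontology is just the set of things you decide to include, some of which you can decompose further, plus a rough sense of how important each one is.

```python
# The ontology: which things go into the model at all.
ontology = {"current rent", "new rent", "commute time", "neighborhood", "moving hassle"}

# Decompose a coarse item into smaller ones, like a mechanic taking a part apart.
decomposition = {
    "neighborhood": {"noise", "safety", "distance to friends"},
}

# Rough relative importance, so you neither forget the important things
# nor dwell on the unimportant ones. Numbers are made up.
importance = {"commute time": 0.4, "new rent": 0.3, "neighborhood": 0.2,
              "current rent": 0.05, "moving hassle": 0.05}

print(sorted(ontology, key=lambda item: importance[item], reverse=True))
```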

The next thing is to figure out which things are connected to other things. Of course, some of the connections will not be important, so you don’t need to include them. (math analogy: graphs) It may also be important to note which things cause other things and which things just correlate with each other.

Once you know which things are connected to each other, the next thing is to estimate how strong the effects are. If you climb a tall mountain, the boiling point of water will be lower, and you can draw a graph with altitude on the x axis and boiling point on the y axis. If some things correlate, you can draw a graph in a similar way. (math analogy: functions)
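
The same graph as a function in code, using the common rule of thumb that the boiling point of water drops by roughly 1 °C for every 300 m of altitude (a rough approximation, good enough for a quick model):

```python
def boiling_point_celsius(altitude_m):
    # Rule of thumb: about 1 degree C lower per ~300 m of altitude.
    return 100.0 - altitude_m / 300.0

for altitude in (0, 1500, 3000, 4500):
    print(f"{altitude:>4} m -> boils at about {boiling_point_celsius(altitude):.1f} C")
```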

Some things can’t be quantified that way, instead they can be either true or false. In that case, you can model the dependencies between them. (math analogy: logic)

For some things, you don’t know exactly how they depend on each other, so you need to model probabilities: if X is true, how probable is it that Y is true? (math analogy: basic probability theory)

Some things are more complicated. For example, if you are building a model of what the temperature in your town will be 7 days from now, there are various possible outcomes, each of which has some probability. You can draw a graph with temperature on the x axis and probability on the y axis. (math analogy: probability distribution)
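
A minimal sketch of such a forecast as an explicit distribution (the probabilities are invented; the only requirement is that they sum to 1):

```python
# A hand-made forecast: probability of each temperature range 7 days from now.
forecast = {"below 0 C": 0.05, "0-5 C": 0.20, "5-10 C": 0.40,
            "10-15 C": 0.25, "above 15 C": 0.10}
assert abs(sum(forecast.values()) - 1.0) < 1e-9

# Using the midpoint of each range gives a rough expected temperature.
midpoints = {"below 0 C": -2.5, "0-5 C": 2.5, "5-10 C": 7.5,
             "10-15 C": 12.5, "above 15 C": 17.5}
print(sum(p * midpoints[band] for band, p in forecast.items()))   # about 8.25
```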

Let’s say you are making a decision and you can decide either A or B. You can imagine two different worlds: one in which you made decision A, the other in which you made decision B. How would the two worlds differ from each other? (math analogy: decision theory)

Making the decision model more realistic: for each decision there is a space of possibilities, and it’s important to know what that space looks like, which things are possible and which are not. Each possibility has some probability of happening. You can say that for each decision there exists a probability distribution over the space of possibilities.
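
Putting the last two paragraphs together in a minimal sketch (all probabilities and utilities are invented): each decision gets its own distribution over outcomes, and you compare decisions by expected utility.

```python
# For each decision: possible outcomes with (probability, utility). Numbers invented.
decisions = {
    "A": {"great": (0.20, 10), "okay": (0.60, 2), "bad": (0.20, -5)},
    "B": {"great": (0.05, 10), "okay": (0.90, 3), "bad": (0.05, -5)},
}

def expected_utility(outcomes):
    assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes.values())

for name, outcomes in decisions.items():
    print(f"decision {name}: expected utility {expected_utility(outcomes):.2f}")
```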

You can take into consideration the dynamics of things, how things change over time. (math analogy: mathematical analysis)

A more advanced example: each thing in your ontology has a number of properties, let’s call it n. Each of those is a dimension, so the thing you are thinking about is a dot in an n-dimensional property space. (math analogy: linear algebra)
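
A minimal sketch of the property-space idea, reusing the apartment example (all numbers invented): each thing becomes a point with one coordinate per property, and you can then, for instance, measure how far apart two things are.

```python
import math

# Each apartment is a point in a 3-dimensional property space:
# (rent in $, commute in minutes, size in m^2). Numbers are invented.
apartments = {
    "current": (900, 45, 55),
    "option 1": (1100, 20, 50),
    "option 2": (950, 40, 70),
}

def distance(a, b):
    # Plain Euclidean distance; a real model would rescale the axes first,
    # since dollars and minutes are not directly comparable.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(apartments["current"], apartments["option 1"]))
```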

How many people build explicit models? It seems to me not many, except maybe in their professional lives. Just by building an explicit model, once you understand the basic concepts and tools for doing so, you get rid of almost all logical fallacies and cognitive biases. The first thing is to take into account which things are actually important and which are not: the basic ontology. How often do people even ask that question? It’s good to have correct models and bad to have incorrect ones, but the first step is to have models at all.

Almost any model you build is going to be incomplete. You need to take that into account, and there already is a name for it: model uncertainty. Often a thing is just a black box you can’t yet understand, especially if you are reflecting on yourself and your emotions; in those cases intuition (or felt sense) has a crucial role to play. The same is obviously true for decisions which need to be made in a second; that’s not enough time to build a model. I’m not saying you need to replace all of your thinking with explicit modeling. But when it comes to learning new things, figuring things out and problem solving, explicit modeling is necessary. It seems it would be good to have the skill developed well enough to have it ready for deployment at the level of a reflex.

When you have a disagreement with someone, the reason is that you don’t have the same models. If you simply explain what your model of the phenomenon under discussion is and how you came to it, and the other person explains their model, it will be clear what the source of the disagreement is.

There are various specific techniques for becoming more rational, but a large part of them are simply examples of how you should substitute explicit modeling for your intuition. The same is the case for instrumental rationality: if you have a good model of yourself, of what you really want, of your actions, of the outcomes of your actions, of how to influence yourself to take those actions, etc., you will be able to steer yourself into taking the actions which lead to the outcome you desire. The first thing is to build a good explicit model.

It’s good to know how our minds can fail us and what the most common cognitive biases and mistakes are. If you know what the failure pattern is, you can learn to recognize and avoid that kind of failure in the future. There’s another way of avoiding mistakes: building good models. If you simply focus on the goal and how to achieve it, you can avoid cognitive biases without even knowing they exist.

I’m not saying that knowing certain problem-solving techniques is bad, but becoming better at modeling things from first principles often beats learning a lot of specific techniques and failure patterns. Things may be different in a professional setting where you do a highly specialized job which requires a lot of specific techniques, but I’m talking about becoming more rational in general. Living your own specific life is not a wide area of expertise – it’s just you, facing many unique problems.

The other way general modeling beats specifics is the following: there’s no benefit to driving faster if you’re driving in the wrong direction. Techniques give you velocity; modeling gives you direction. Especially if you go wide enough and model things from first principles, as Elon Musk calls it. It focuses you on the problem, not on actions. The problem with specific techniques is that they are action-focused, but to really solve a novel and unique problem you need to be problem-focused. Before you know what action to take, you need to understand the problem better. This is also known as Theory of Change in some circles. The effect of knowing where you are going is larger than the effect of anything else in getting you to your destination. The most important thing is prioritization.

The alternative to that is reasoning by analogy. The problem with analogies is that they break down if the things you are analogizing are dissimilar enough. The more complex the problem and the environment – for example, a project involving multiple people – the less well the analogy will work. When talking about small, simple systems, analogies can work, but you need to take care to check whether the analogy is valid. To do that you need explicit models: how are the two things similar and how are they different; is any difference important enough to break the analogy?

One way to improve your modeling capabilities is practice. Simply, when you think about something, pay extra attention to the various tools of modeling (ontology, graphs, functions, probabilities…) depending on which tool you want to get better at using. After a certain amount of practice your brain will become better at it and it will become, if not automatic, then at least painless. Another thing which improves the ability is simply learning a lot of already existing models from various disciplines; that way you gather more tools with which to build your own models. Areas of special interest for that purpose seem to be math, physics, computer science, and philosophy.

Humans have limited working memory, so it’s crucial to focus first on the most important things only. Later you can build on your model in an iterative way and make it more complex by decomposing things or introducing new things into the ontology. The problem with increasing complexity is that the brain is not good at nuance. If you include a lot of things in your ontology, where the most important thing is several orders of magnitude more important than the least important one, the brain is not good at feeling that difference. By default all things seem to have the same importance, and you need to consciously remind yourself of the relative importance of each. That’s the reason why, after you have built up a lot of complexity and figured out what the important things are, you need to simplify down to the important things only.

This is all system 2 thinking, as it’s called in behavioral economics. There may be ways to debias yourself and get your system 1 more in line with system 2, but the best way I’ve found to do that is, again, by building models and thinking about them. If you do that, your system 1 will with time fall more and more in line with system 2. Even if it doesn’t fall in line, you will learn to rely on models more in the cases where they work better. Bankers use exponential discounting (at least when the stakes are high, for money in a professional setting) even if their system 1 is using hyperbolic discounting. The reason they are able to be unbiased there is that they have a good model.

We can avoid a huge amount of irrationality just by focusing more on explicit model building. If every time we had a problem our first impulse was to build an explicit model of it, there would be a lot more rationality in the world. The problem in most cases is not that people have incorrect models; they don’t have explicit models at all and are relying on intuition and analogy instead. If it could in any way be said that they have incorrect models, those models are implicit and have never been examined.

Spreading the meme of “first principles” can be hard because of the common misunderstanding which says that when you build models you are reducing the thing you are modeling to the model. For example, when effective altruists try to estimate the cost of saving a life, someone may say “you can’t put a dollar sign on a life” or something similar. This of course makes no sense, because no one is really reducing the phenomenon being modeled to the model. We all know that the real thing is more complicated than any model of it. Some models can be overly simple, but that is a problem with the model, not with the practice of modeling itself. If your model is too simple, just add the things which are missing. In practice, the opposite error – creating overly complex models full of irrelevant things and missing the important ones – seems just as common.

Of course, building good models is hard. But now that we know that modeling is the thing we are trying to do when we are trying to be rational, we can at least think about it in a principled way. We can build models about how to become better at building better models.