Structure of disagreements

How many communicating civilizations are there in the visible universe? Alice thinks there are a lot. Bob thinks we are the only one. Why? Things you can take into account are:

  • How many stars are there?
  • Of those, how many have planets?
  • Of those, how many planets on average can potentially support life?
  • Of those, on how many does life actually develop?
  • Of those, how many are intelligent enough to develop civilization?
  • Of those, how many are communicating through space?
  • How long do they keep sending signals out?

This gives us the Drake equation. There are many different ways to disagree about this question. Even if Alice thinks life develops on just 0.001 of potential planets and Bob thinks it’s 0.005, Bob can still believe we are the only civilization while Alice believes there are a lot of civilizations out there, because they also differ on the other parameters.

Even if their models become more similar, with Alice and Bob both moving to 0.002, Alice can become even more sure there are a lot of civilizations and Bob even more sure we are the only one.
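
To make that concrete, here is a minimal sketch in Python. All the parameter values are made up for illustration; the point is only that two people can be close on one parameter and still land on opposite conclusions because of the others:

```python
# A quick model of the Drake equation. Parameter values are invented.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Alice: pessimistic about life arising (f_l = 0.001), optimistic about
# everything downstream of it.
alice = drake(R_star=10, f_p=0.5, n_e=2, f_l=0.001, f_i=0.5, f_c=0.5, L=1e6)

# Bob: five times more optimistic about life arising (f_l = 0.005), but
# pessimistic about intelligence, communication and longevity.
bob = drake(R_star=10, f_p=0.5, n_e=2, f_l=0.005, f_i=1e-4, f_c=0.1, L=1e3)

print(f"Alice expects ~{alice:,.0f} civilizations")  # ~2,500
print(f"Bob expects ~{bob:.4f} civilizations")       # ~0.0005: effectively alone
```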

The fundamental reason for disagreements is that they have different models of the world. If they could go parameter by parameter and agree on each, they would eventually come to an agreement. They could take the last one (of all planets which have developed life, how many develop civilization) and decompose it into some plausible steps which are needed for intelligence to develop: development of cells, the cell nucleus (eukaryotes), sexual reproduction, multicellular life, big brains in tool-using animals, etc. (examples taken from Robin Hanson). That will surely give them a lot to talk about. When they resolve all that, they can move on to the next part they disagree about.

The problem, of course, is that disagreements in real life aren’t that simple. In real life you don’t have an equation which two people insert numbers into, so you can’t inspect the numbers and see which ones are different. In real life it’s hard to know what the root cause of the disagreement is. If only we had such models, imagine how much easier our lives would be! Luckily, you can build such models. You can do it on the fly in the middle of a conversation, and in most cases it takes just a few seconds. And I’m not talking just about Fermi estimates. Let’s call this more general practice quick modeling.

First you need to do quick modeling of your own reasons for believing things and explain them to the other person. The better you are at quick modeling, the better you can explain yourself. It would be great if the other person did the same, but if they don’t, you can do it for them. If you do it politely, it will look to the other person like you are simply trying to understand them, which in fact you honestly are trying to do. For example, Bob thinks Trump is better than Hillary on the question of immigration, and you disagree. There are different reasons to be concerned about immigration, and when Bob tells you his reasons, you can try to put what he said in simpler terms using some basic model. For starters, just ask yourself:

  • What are, in fact, the things we are talking about? In the case of immigration these could be: the president, laws, immigrants, natives, the economy, crime, stability, culture, terrorism, etc.
  • What are the causal connections between the things we are talking about? In the case of immigration, for example, some people may think immigrants influence the economy positively, some negatively.
  • What is the most important factor here? Of all the things listed, which matters most to Bob? Is there some underlying reason for that? What does Bob in general think is important?

The goal is to take what the other person is saying in terms of intuitions, analogies and metaphors and transform it into a model. In the previous example, you can think of it as a Drake equation for immigration. Imagine the world five years from now: in one, Trump makes decisions on immigration (world T); in the other, Hillary does (world H). Since the topic is only immigration, don’t imagine the whole presidency; limit yourself to keeping everything the same and changing only the parameter under discussion. Which world is better, i.e. what is the difference in utility between those two worlds?
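
As a sketch of what that could look like, here is a toy “Drake equation for immigration”. The factor names, weights and scores are all invented placeholders, not real estimates; the value is that the disagreement now decomposes into disagreements about specific weights and specific scores:

```python
# How much does Bob care about each factor? (invented weights, sum to 1)
weights = {
    "economy":   0.3,
    "crime":     0.2,
    "culture":   0.2,
    "terrorism": 0.3,
}

# Bob's guesses for each imagined world on a -1..1 scale (also invented).
world_T = {"economy": -0.1, "crime": 0.3, "culture": 0.2, "terrorism": 0.4}
world_H = {"economy":  0.2, "crime": 0.1, "culture": 0.0, "terrorism": 0.1}

def utility(world, weights):
    """Weighted sum of how well a world scores on each factor."""
    return sum(weights[f] * world[f] for f in weights)

diff = utility(world_T, weights) - utility(world_H, weights)
print(f"Utility difference (T - H): {diff:+.2f}")
# Now ask: which weight or which score do you two actually disagree about?
```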

Bob has told you the reason why he thinks what he thinks, and you have built a quick model of it. The next step is to present that model to him to see if you understood him correctly. If you got it wrong, you can work together to make it more accurate. When he agrees that you got the basic model right, that’s the first step: you understand the core of his model. Maybe he didn’t think about the topic in those terms before, and the model was only implicit in his head, but now you have it in some explicit form. The first model you build will be oversimplified; Bob may add something to it, and you should also try to find other important things which should be added. Take the most important part you disagree about and decompose it further. When you resolve all the important sub-disagreements, you are done.

Let’s take a harder case. Alice voted for Trump because he will shake things up a bit. How do you build a model from that? First step: what are we talking about? The world is currently in state X and after Trump it will be in state Y. In the process, the thing which will be shaken up is the US government, which affects the state we find ourselves in, and Alice thinks Y will be better than X. What sort of things get better when shaken up? If you roll a six-sided die and get a 1, rolling again will probably get you a larger number. So if you think things are going really terribly right now, you will agree with Alice. (Modeling this with prospect theory gives the same result.) What gets worse when shaken up? Systems which are in some good state. You may also have noticed that most systems get worse when shaken up, especially large complex systems with lots of parameters (high dimensionality), because there are more ways to do something wrong than to do it right. On the other hand, intelligent, self-correcting systems sometimes end up in local optima, so if you shake them up you can end up in a better state. To what degree does the US government have that property? What kind of system is it anyway? Could it be that some parts of the US government get better when shaken up while others get worse? The better you are at quick modeling, the better you can understand what the other person is saying.
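
A toy simulation makes the die intuition and the high-dimensionality point concrete. Everything here is illustrative, not a model of any real government:

```python
import random

# Re-rolling a die helps when the current state is bad and hurts when it
# is good: the expected value of a fresh roll is always 3.5.
def expected_change_after_reroll(current, sides=6, trials=100_000):
    return sum(random.randint(1, sides) - current for _ in range(trials)) / trials

print(expected_change_after_reroll(1))  # ~ +2.5: shaking up a bad state helps
print(expected_change_after_reroll(6))  # ~ -2.5: shaking up a good state hurts

# The high-dimensional version: perturb a system sitting near a good
# configuration and almost every random move makes it worse, because
# there are many more ways to be wrong than to be right.
def badness(state):
    return sum(x * x for x in state)  # pretend 0 is optimal in every dimension

good_state = [0.0] * 50
shaken = [x + random.uniform(-0.5, 0.5) for x in good_state]
print(badness(good_state), "->", badness(shaken))  # badness almost surely rises
```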

When you notice the model has become too complex or you have gotten in too deep, you can simply return to the beginning of the disagreement, think a bit about what the next most important thing would be, and try that route. You can think of the disagreement as a tree:

[Figure: a disagreement tree. The root A is the original disagreement; its children B and C are the things it depends on; B in turn depends on D and E.]

At the root of the tree you have the original disagreement; the nodes below (B and C in the picture) are the things the disagreement depends on (e.g. parameters in the Drake equation), and so on further down: what you think about B depends on D and E, etc. You can disagree about many of those, but some are more important than others.

One thing you need to be aware of is going too deep too fast. In the Drake equation there are seven parameters, and you may disagree right at the start on the first one, the number of stars. That parameter may depend on five other things, and you may disagree on the first one of those, etc. Two hours later you may resolve your disagreement on the first parameter, only to realize, when you come to the second parameter, that the first disagreement was, in relative terms, insignificant. That’s why you should first build a simple first approximation of the whole model, and only after that decompose it further. Don’t interrupt the other speaker so you can dive into a digression; only after you have heard all of the basic parts do you have enough information to know which parts are important enough to decompose further. Avoid unnecessary digressions. The same principle holds not only for the listener but also for the speaker, who should try to keep the first version of the model as simple as possible. When someone is digressing too much it may be hard to politely ask them to speed up, but it can often be done, especially when debating with friends.
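
If you want the picture in code, here is one way to sketch such a tree and the “survey one level first, then descend into the most important child” rule. The tree structure and the importance numbers are invented placeholders:

```python
# A disagreement tree: nodes are sub-disagreements, each with a guess at
# how much it matters for the root question. Structure and numbers are
# made up for illustration.
tree = {
    "A: number of civilizations": {
        "importance": 1.0,
        "children": {
            "B: number of stars": {"importance": 0.05, "children": {}},
            "C: fraction that develop life": {"importance": 0.80, "children": {
                "D: eukaryotes arising": {"importance": 0.6, "children": {}},
                "E: multicellular life": {"importance": 0.2, "children": {}},
            }},
        },
    },
}

def next_to_discuss(node):
    """Pick the most important child of this node; don't dive deeper yet."""
    children = node["children"]
    if not children:
        return None
    return max(children.items(), key=lambda kv: kv[1]["importance"])

root = tree["A: number of civilizations"]
name, child = next_to_discuss(root)
print("Discuss first:", name)                    # C, not B: skip the minor one
print("Then, within it:", next_to_discuss(child)[0])
```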

In some cases there may be just one part of the equation which dominates the whole thing, and you may disagree on exactly that part, which reduces the disagreement about A to a disagreement about B. Consider yourself lucky: that’s a rare find, and you have just simplified your disagreement. In that case, you can call B a double crux.

Applying the quick modeling technique with other people will often reveal that people’s models of the world can be very complex. It may take you a lot of time to come to an agreement, and maybe you simply don’t have the time. Or you may need many conversations about different parts of the whole model, resolving each part separately before coming to full agreement. Some people are just not interested in discussions of some topics, some people are not intellectually honest, etc.; the usual limitations apply.

Things which may help:

  • Think of concrete examples. Simulate in your head “how would the world look if X”. That can help you with decomposition: if you run a mental simulation, you will see what things consist of, what is connected to what, and what the mechanisms are.
  • Fermi estimates. Just thinking about how you would estimate a thing will get you into a modeling state of mind, and putting the actual numbers in will give you a sense of the relative importance of things. (See the sketch after this list.)
  • Ask yourself why you believe something is true, and ask the same about the other person. You can say to them in a humble tone of voice, “…that’s interesting, how did you reach that conclusion?” It’s important to actually want to know how they reached that conclusion, which in fact you will want to know if you are doing quick modeling.
  • Simplify. When things get too complex, fix the value of some variable. This one is identical to advice from LessWrong, so I will borrow an example from them: it’s often easier to answer questions like “How much of our next $10,000 should we spend on research, as opposed to advertising?” than to answer “Which is more important right now, research or advertising?” The other way of simplifying is to change just one value while holding everything else constant, what economists call ceteris paribus. Simulate how the world would look if just one variable changed.
  • Think about what the edge cases are, but pay attention to them only when they are important. Sometimes they are; mostly they are not. So ignore them whenever you can. The other person is almost always talking about the average, standard case, and if you don’t agree about what happens on average there is no sense in discussing weird edge cases.
  • In complex topics: think about inferential distance. To take the example from LessWrong: explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn’t already know about evolution, they would understand the concepts of evidence, Occam’s razor, [etc… while] explaining the evidence for the theory of evolution to someone without a science background would be much harder. There may be some fundamental concepts which the other person is using and you are not familiar with; ask about them, and think about what you may learn from the other person. Also, try to notice if the other person doesn’t understand some basic concept you are using, and clarify what you mean in a polite, non-condescending way.
  • In complex topics: think about differences in values. If you think vanilla tastes better than chocolate and the other person disagrees, that’s a difference in values. You should separate that from the model of how the external world works, and focus on talking about the world first. In most cases it makes no sense to talk about the values of different outcomes when you disagree even about what the outcome will be. What sometimes looks like a difference in values is often connected to what you think about how the world works. Talk about values only if you seem to completely agree about all of the relevant aspects of the external-world model. When talking about values, also do quick modeling, as usual.
  • Practice.
  • And the most important for last: become better at quick modeling in general.
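
As promised in the Fermi estimates bullet above, here is a classic throwaway example (Fermi’s piano tuners question), written out as a quick model. Every number is a rough guess, which is exactly the point:

```python
# A quick Fermi estimate: how many piano tuners are there in Chicago?
# Every number below is a rough order-of-magnitude guess.

population        = 3e6      # people in Chicago, roughly
people_per_piano  = 100      # guess: one piano per hundred people
tunings_per_year  = 1        # each piano tuned about once a year
tunings_per_tuner = 2 * 250  # ~2 tunings a day, ~250 working days a year

pianos = population / people_per_piano
tuners = pianos * tunings_per_year / tunings_per_tuner
print(f"~{tuners:.0f} piano tuners")  # ~60: the right order of magnitude
```

Writing it out like this also shows you immediately which guesses the answer is most sensitive to.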

There was a post on LessWrong about a method called Double Crux for resolving disagreements, but I think quick modeling is a better, more general method. Also, the Double Crux blog post mentions some things which I think you should not worry about:

  • Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is not usually the case that those are the problem. If at some point in the discussion it turns out some of them are a problem, it will be crystal clear to you and you can just stop the discussion right there, but don’t worry about it beforehand.
  • Noticing of subtle tastes, focusing and other resonance checks, etc. Instead of focusing on how your mind may be making errors and introducing subtle biases, do what scientists and engineers have done for centuries and what works best: look at the world and build models of how the world works. When Elon Musk is designing batteries he doesn’t need to think about the subtle biases of his mind; he needs to think about the model of the battery and how to improve it. When bankers are determining interest rates they don’t need to think about how their minds do hyperbolic discounting; they can simply replace it with exponential discounting. The same goes for disagreements: you need to build a model of what you think about the topic and of what the other person thinks, and decompose each sub-disagreement successively.

Advantages over Double Crux:

  • You don’t need both people to be familiar with the method.
  • It works even when there is no double crux. In my experience most disagreements are too complex for a double crux to exist.
  • There is no algorithm to memorize and you don’t need a whiteboard.

Not to mention quick modeling is useful not just for resolving disagreements but also for making better decisions and, obviously, forming more correct models of the world.


One thought on “Structure of disagreements”

  1. entirelyuseless

    “Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is not usually the case those are the problem.”

    I disagree. These things exist in degrees, not simply as “yes it is there or no it is not.” So two people will have them to different degrees, and this usually causes impediments to agreement. If you are going to “just stop the discussion right there” as soon as you see any problem resulting from these things, then you shouldn’t even start the discussion in the first place, because they will always be problems.

    In fact I frequently don’t start a discussion because I know these things are going to be issues, and online I frequently don’t respond to the very first response to my comment, because it is evident from their first comment that these things are issues.
