Simple rationality

How to be rational? Model things explicitly, and become good at it. That’s it. Building good models is not easy and requires some work, but the goal is simple.

When thinking about something, there are things to take into consideration. For example, if you are thinking about moving into a new apartment, you may consider the price of the current apartment, the price of the new apartment, how long your commute will be, whether you like the neighborhood, how important each of those things is to you, and so on. We can make a list of things to take into consideration; call that the ontology of the model. You can decompose a thing into smaller things, in a similar way to how a mechanic does when repairing a piece of equipment, or a physicist does when thinking about a system. It’s bad to forget about things which are important, and it’s also bad to spend thought on things which are not important. So there is an optimal ontology which you should be aiming for. (math analogy: sets)
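
To make that concrete, here is a minimal sketch in Python; the considerations, weights, and threshold are all invented for illustration:

```python
# A toy ontology for the apartment decision: each consideration
# paired with a subjective importance weight (invented numbers).
ontology = {
    "current_rent": 0.8,
    "new_rent": 0.9,
    "commute_time": 0.7,
    "neighborhood": 0.5,
    "moving_costs": 0.1,
    "color_of_mailbox": 0.001,  # probably doesn't belong in the model
}

# Aiming for the optimal ontology: keep only the considerations
# important enough to be worth thinking about.
THRESHOLD = 0.05
model = {k: w for k, w in ontology.items() if w >= THRESHOLD}
print(sorted(model, key=model.get, reverse=True))
```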

The next thing is to figure out which things are connected to other things. Of course, some of the connections will not be important, so you don’t need to include them. (math analogy: graphs) It may also be important to note which things cause other things and which things merely correlate with each other.
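
A sketch of those connections as a labeled graph; the example links are assumptions, not claims:

```python
# Connections between things in the apartment model, stored as
# labeled edges. Causal links and mere correlations are kept distinct.
edges = [
    ("new_rent", "monthly_budget", "causes"),
    ("commute_time", "free_time", "causes"),
    ("neighborhood", "new_rent", "correlates"),
]

causal = [(a, b) for a, b, kind in edges if kind == "causes"]
print("causal links:", causal)
```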

So, once you know which things are connected to each other, the next thing is to estimate how strong the effects are. If you climb a tall mountain, the boiling point of water will be lower, and you can draw a graph with altitude on the x-axis and boiling point on the y-axis. If some things correlate, you can draw a graph in a similar way. (math analogy: functions)
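
As a sketch, the altitude example written as a function, using the common rule of thumb that the boiling point drops by roughly 1 °C per 300 m of altitude (the real relationship is more complicated):

```python
def boiling_point_c(altitude_m: float) -> float:
    """Approximate boiling point of water at a given altitude,
    using the rough rule of thumb of ~1 degree C lost per 300 m."""
    return 100.0 - altitude_m / 300.0

for altitude in (0, 1500, 3000, 4500):
    print(altitude, "m ->", round(boiling_point_c(altitude), 1), "C")
```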

Some things can’t be quantified that way; instead, they are either true or false. In that case, you can model the dependencies between them. (math analogy: logic)
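
A minimal sketch of that kind of true/false modeling; the facts and the rule are invented:

```python
def implies(x: bool, y: bool) -> bool:
    # "X implies Y" is false only when X is true and Y is false
    return (not x) or y

rent_fits_budget = True
have_deposit = False
can_move = rent_fits_budget and have_deposit  # both must hold

# Check a dependency: "we can move" should imply "the rent fits the budget".
print(can_move, implies(can_move, rent_fits_budget))  # False True
```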

For some things, you don’t know exactly how they depend on each other, so you need to model probabilities: if X is true, how probable is it that Y is true? (math analogy: basic probability theory)
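
A sketch of estimating such a conditional probability from observations; the data is made up:

```python
# Made-up observations: (X happened, Y happened) pairs.
observations = [(True, True), (True, False), (True, True),
                (False, True), (False, False), (True, True)]

# P(Y | X): among the cases where X was true, how often was Y true?
y_when_x = [y for x, y in observations if x]
p_y_given_x = sum(y_when_x) / len(y_when_x)
print(p_y_given_x)  # 0.75 on this made-up data
```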

Some things are more complicated. For example, if you are building a model of what the temperature in your town will be 7 days from now, there are various possible outcomes, each of which has some probability. You can draw a graph with temperature on the x-axis and probability on the y-axis. (math analogy: probability distribution)
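
A sketch of such a forecast as a discrete distribution; the probabilities are invented:

```python
# Forecast for 7 days from now: temperature (C) -> probability.
forecast = {18: 0.1, 20: 0.2, 22: 0.4, 24: 0.2, 26: 0.1}

assert abs(sum(forecast.values()) - 1.0) < 1e-9  # must sum to 1

expected_temp = sum(t * p for t, p in forecast.items())
print(expected_temp)  # 22.0
```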

Let’s say you are making a decision and you can decide either A or B. You can imagine two different worlds, one in which you made decision A, the other in which you made decision B. How would the two worlds differ from each other? (math analogy: decision theory)

To make the decision model more realistic: for each decision there is a space of possibilities, and it’s important to know what that space looks like, which things are possible and which are not. Each of those possibilities has some probability of happening. You can say that for each decision there exists a probability distribution on the space of possibilities.
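
Putting the last two paragraphs together, here is a sketch: each decision gets a probability distribution over outcomes, and decisions can be compared by expected utility. All the numbers are invented:

```python
# Each decision maps to a distribution over outcomes, and each
# outcome has a utility. All numbers are invented for illustration.
decisions = {
    "A": {"good": 0.6, "ok": 0.3, "bad": 0.1},
    "B": {"good": 0.3, "ok": 0.6, "bad": 0.1},
}
utility = {"good": 10, "ok": 5, "bad": -20}

def expected_utility(dist):
    return sum(p * utility[outcome] for outcome, p in dist.items())

best = max(decisions, key=lambda d: expected_utility(decisions[d]))
print({d: expected_utility(dist) for d, dist in decisions.items()}, "->", best)
```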

You can take into consideration the dynamics of things: how they change over time. (math analogy: mathematical analysis)
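
A sketch of modeling dynamics by stepping a quantity through time, here Newton’s law of cooling with invented constants:

```python
# Newton's law of cooling in discrete time steps: the coffee's
# temperature moves toward room temperature at a rate proportional
# to the difference. All constants are invented.
temp, room, k, dt = 90.0, 20.0, 0.1, 1.0

history = []
for minute in range(31):
    if minute % 10 == 0:
        history.append((minute, round(temp, 1)))
    temp += -k * (temp - room) * dt  # Euler step

print(history)  # temperature falling toward room temperature
```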

A more advanced example: each thing in your ontology has a number of properties; call that number n. Each property is a dimension, so the thing you are thinking about is a point in an n-dimensional property space. (math analogy: linear algebra)
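
A sketch of two things as points in a property space, compared by distance; the properties and values are invented:

```python
import math

# Two apartments as points in a 3-dimensional property space:
# (rent in hundreds, commute in tens of minutes, size in tens of m^2).
# Values are invented; in practice you'd also want to normalize units.
apartment_a = (12.0, 3.5, 6.0)
apartment_b = (9.0, 5.0, 4.5)

distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(apartment_a, apartment_b)))
print(round(distance, 2))
```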

How many people build explicit models? Not many, it seems to me, except maybe in their professional lives. Just by building an explicit model, once you understand the basic concepts and tools for doing so, you get rid of almost all logical fallacies and cognitive biases. The first thing is to take into account which things are actually important and which are not: the basic ontology. How often do people even ask that question? It’s good to have correct models and bad to have incorrect ones, but the first step is to have models at all.

Almost any model you build is not going to be complete. You need to take that into account, and there is already a name for that: model uncertainty. Often a thing is just a black box you can’t yet understand, especially if you are reflecting on yourself and your emotions; in those cases intuition (or felt sense) has a crucial role to play. The same is obviously true for decisions which need to be made in a second; that’s not enough time to build a model. I’m not saying you need to replace all of your thinking with explicit modeling. But when it comes to learning new things, figuring things out, and problem solving, explicit modeling is necessary. It would be good to have that skill developed well enough that it’s ready for deployment at the level of reflex.

When you have a disagreement with someone, the reason is that you don’t have the same models. If you simply explain what your model of the phenomenon under discussion is and how you came to it, and the other person explains their model, it will be clear what the source of the disagreement is.

There are various specific techniques for becoming more rational, but a large part of them are simply examples of how you should substitute explicit modeling for intuition. The same is the case for instrumental rationality: if you have a good model of yourself, of what you really want, of your actions, of the outcomes of your actions, of how to influence yourself to take those actions, and so on, you will be able to steer yourself into taking the actions which lead to the outcome you desire. The first thing is to build a good explicit model.

It’s good to know how our minds can fail us, what the most common cognitive biases and mistakes are. If you know what the failure pattern is, then you can learn to recognize and avoid that kind of failure in the future. There’s another way of avoiding mistakes: building good models. If you simply focus on the goal and how to achieve it, you can avoid cognitive biases without knowing they exist.

I’m not saying that knowing certain problem-solving techniques is bad, but becoming better at modeling things from first principles often beats learning a lot of specific techniques and failure patterns. Things may be different in a professional setting where you do a highly specialized job which requires a lot of specific techniques, but I’m talking about becoming more rational in general. Living your own specific life is not an area of expertise; it’s just you, facing many unique problems.

Another way general modeling beats specifics: there’s no benefit to driving faster if you’re driving in the wrong direction. Techniques give you velocity; modeling gives you direction. Especially if you go wide enough and model things from first principles, as Elon Musk calls it. It focuses you on the problem, not on actions. The problem with specific techniques is that they are action-focused, but to really solve a novel and unique problem you need to be problem-focused. Before you know what action to take, you need to understand the problem better. This is also known as a Theory of Change in some circles. The effect of knowing where you are going is larger than the effect of anything else in getting you to your destination. The most important thing is prioritization.

The alternative to that is reasoning by analogy. The problem with analogies is that they break down if the things you are analogizing are dissimilar enough. The more complex the problem and the environment (for example, a project involving multiple people), the less well the analogy will work. When talking about small, simple systems, analogies can work, but you need to take care to check whether the analogy is valid. To do that you need explicit models: how are the two things similar, and how are they different? Is any difference important enough to break the analogy?

One way to improve your modeling capabilities is practice. Simply, when you think about something, you can pay extra attention to the various tools of modeling (ontology, graphs, functions, probabilities, and so on) depending on which tool you want to get better at using. That way, after a certain amount of practice, your brain will become better at it and it will become, if not automatic, then at least painless. Another thing which improves the ability is simply learning a lot of the already existing models from various disciplines; that way you gather more tools with which to build your own models. Areas of special interest for that purpose seem to be math, physics, computer science, and philosophy.

Humans have limited working memory, so it’s crucial to focus first on the most important things only. Later you can build on your model in an iterative way and make it more complex by decomposing things or introducing new things into the ontology. The problem with increasing complexity is that the brain is not good at nuance. If you include a lot of things in your ontology, where the most important thing is several orders of magnitude more important than the least important thing, the brain is not good at feeling that difference. By default all things seem to have the same importance, and you need to consciously remind yourself of the relative importance of each thing. That’s the reason why, after you have built up a lot of complexity and figured out what the important things are, you need to simplify back down to the important things only.
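
A sketch of why that matters: when weights span orders of magnitude, the top items dominate the total, even though a flat list makes every item feel equal. The weights are invented:

```python
# Invented importance weights spanning several orders of magnitude.
weights = {"health": 100.0, "career": 30.0, "hobby_gear": 0.3, "mailbox_color": 0.01}

total = sum(weights.values())
for thing, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{thing}: {w / total:.1%} of total importance")
# The top two items carry ~99.8% of the weight; the rest is noise.
```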

This is all system 2 thinking, as it’s called in behavioral economics. There may be ways to debias yourself and get your system 1 more in line with system 2, but the best way I’ve found to do that is, again, by building models and thinking about them. If you do that, your system 1 will over time fall more and more in line with system 2. Even if it doesn’t fall in line, you will learn to rely on models more in the cases where they work better. Bankers use exponential discounting (at least when the stakes are high, for money, in a professional setting) even if their system 1 is using hyperbolic discounting. The reason they are able to be unbiased there is that they have a good model.
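
A sketch of the difference, showing the preference reversal that hyperbolic discounting produces and exponential discounting doesn’t; the constants are invented:

```python
def exponential(amount, t, delta=0.95):
    # value today of `amount` received t days from now (invented delta)
    return amount * delta ** t

def hyperbolic(amount, t, k=1.0):
    # hyperbolic discounting with an invented steepness k
    return amount / (1 + k * t)

# Choice: 100 at day t versus 110 at day t+1, evaluated from today.
for t in (0, 30):
    for name, f in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        pick = "100 sooner" if f(100, t) > f(110, t + 1) else "110 later"
        print(f"day {t:2d}, {name}: {pick}")
# Exponential picks "110 later" both times; hyperbolic flips from
# "100 sooner" at day 0 to "110 later" at day 30: a preference reversal.
```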

We can avoid a huge amount of irrationality just by focusing more on explicit model building. If every time we had a problem our first impulse was to build an explicit model of it, there would be a lot more rationality in the world. The problem in most cases is not that people have incorrect models; it’s that they don’t have explicit models at all and are relying on intuition and analogy instead. If it could in any way be said that they have incorrect models, those are implicit ones.

Spreading the meme of “first principles” can be hard because of a common misunderstanding which says that when you build a model you are reducing the thing you are modeling to the model. For example, when effective altruists try to estimate the cost of saving a life, someone may say “you can’t put a dollar sign on a life” or something similar. This of course makes no sense, because no one is really reducing the phenomena being modeled to the model. We all know that the real thing is more complicated than a model of it. Some models can be overly simple, but that is a problem of the model, not of the practice of modeling itself. If your model is too simple, just add the things which are missing. In practice, the error of creating overly complex models, full of irrelevant things and missing the important ones, seems just as common.

Of course, building good models is hard. But now that we know that modeling is the thing we are trying to do when we are trying to be rational, we can at least think about it in a principled way. We can build models of how to become better able to build better models.
