Effective Longevity

A lot of people are trying to extend their lifespan. That is worth less if you get a disease, especially one in which you suffer a lot, where the negative utility of being sick outweighs the positive utility of an average day of your life. That negative utility can have a large absolute value, since bad is stronger than good. Therefore, you should try to minimize the probability of getting such an extreme disease.

What are the extreme diseases, and how can we avoid getting them? What we are looking for is a disease that is:

  1. High-suffering: enough to make the utility negative.
  2. Preventable: if we have a prior probability p_1 of getting the disease and, after taking preventive measures, a posterior probability p_2, we are looking for diseases with a high p_delta = p_1 – p_2.
  3. More specifically, we are looking for the highest suffering * p_delta, which is the expected suffering reduction from taking preventive measures.
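As a toy sketch of criterion 3 – all numbers are made up for illustration, and `disease_a` and `disease_b` are hypothetical:

```python
# Rank hypothetical diseases by expected suffering reduction,
# i.e. suffering * (p_prior - p_posterior). All numbers made up.
diseases = {
    #            (suffering, p_prior, p_posterior)
    "disease_a": (100, 0.05, 0.02),   # common, fairly preventable
    "disease_b": (300, 0.01, 0.009),  # worse, but barely preventable
}

def expected_reduction(suffering, p1, p2):
    """Expected suffering avoided by taking preventive measures."""
    return suffering * (p1 - p2)

ranked = sorted(diseases.items(),
                key=lambda kv: expected_reduction(*kv[1]),
                reverse=True)
for name, params in ranked:
    print(name, round(expected_reduction(*params), 3))
```

Here `disease_a` wins despite its lower suffering, because prevention removes much more probability mass.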

Is there any data on which diseases are the worst? The disability-adjusted life year (DALY) measures the burden of disease. Say you have a disease which takes away 10 years of life; those are the years of life lost (YLL) due to dying early. The same disease lasts for 10 years before you die and has a disability weight of 0.2, which gives 10 * 0.2 = 2 years lived with disability (YLD). DALY = YLL + YLD = 10 + 2 = 12.
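The worked example above, as a short sketch of the standard DALY formula:

```python
def daly(years_of_life_lost, years_with_disease, disability_weight):
    """DALY = YLL + YLD, where YLD = duration * disability weight."""
    yll = years_of_life_lost
    yld = years_with_disease * disability_weight
    return yll + yld

# The example from the text: 10 years of life lost, plus 10 years
# lived with the disease at weight 0.2 -> 10 + 2 = 12 DALYs.
print(daly(10, 10, 0.2))  # 12.0
```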

The more years of life you lose, and the more severe the disease (the higher the disability weight), the higher the burden of disease – more DALYs is bad, fewer is good. The disability weights go from 0 to 1, which means that in the worst case, living with the disease is counted the same as being dead. The obvious shortcoming: what if you have a disease which makes your utility negative? Living longer with an extreme disease would then bring fewer DALYs, i.e. less burden, while it should actually increase the DALY count even more. The weights should be able to go above 1.

A post by Sindy Li points out that the DALY is not a measure of suffering or loss of quality of life, but a measure of lost health. It is estimated by presenting people with two hypothetical individuals in different health states (described briefly in lay language) and asking which person they regard as healthier (the pairwise comparison method). The accuracy of the descriptions is questionable, as is the respondents’ ability to understand the implications of various health states. The other method is to ask people to put themselves in the shoes of a decision maker who has to choose between policies that trade off severity of illness, the size of the health gain and the number of people helped (the person trade-off method). This explicitly captures societal judgements (e.g. the stigma of certain diseases), which we ideally don’t want to include. With both methods, non-health aspects (e.g. the effect of income loss due to reduced productivity) are not taken into account – but they should be. Not enough dimensions of experience were taken into account.

Alternatives to the DALY would be the QALY or EQ-5D, but I can’t find a table of those metrics for particular diseases online. There is a correlation between DALY and EQ-5D-5L, which this study measured as PCC = 0.83. The EQ-5D asks how the disease affects five dimensions: mobility, self-care, usual activities, pain, and anxiety/depression. If several conditions sit at the maximum value on some dimension, one of them could still be a lot worse on that dimension, but that information is lost due to “ceiling effects”. If you can’t imagine what the strongest pain you could feel really feels like, that is equivalent to an implicit ceiling on your evaluation of pain, where all pain as strong as the strongest you can imagine gets the maximum rating. This mechanism is active when evaluating mental disorders, since a healthy person cannot, for example, imagine what it’s like to be schizophrenic.

Since the number of diseases considered in the DALY table is vastly smaller than the total number of diseases, there are probably some rare high-suffering diseases out there which did not get a mention. Of all the diseases which did not get a mention, consider the one with the highest prevalence (i.e. prior probability), the one which almost made the list. If the suffering from that disease is high enough, and it is preventable enough, we may be missing our highest priority completely.

It would be useful to have data about:

  1. Better estimates of disability weights. No idea how to get those, except by conducting a better study than currently exists.
  2. How long do diseases last?
  3. What is the prior probability of getting each disease?
  4. How preventable are they?

One more problem: diseases have various sub-types and affect different people differently, so for each “high-level disease” (e.g. “cancer”; we are not going into sub-types here) we are actually dealing with a probability distribution of suffering. This is important if the amount of suffering goes up exponentially with the level of disability. Let’s break it up into three categories – light, medium, heavy – with light suffering 10, medium 40, heavy 160. A disease with a 10% chance of being light, 80% of being medium and 10% of being heavy is not the same as one with probabilities 20% light, 60% medium, 20% heavy. The first gives 1 + 32 + 16 = 49 expected suffering, the second 2 + 24 + 32 = 58.
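The arithmetic above, spelled out (the severity levels and suffering values are the illustrative ones from the text):

```python
# Illustrative suffering values: each level is 4x the previous,
# reflecting the assumption that suffering grows superlinearly
# with disability.
SUFFERING = {"light": 10, "medium": 40, "heavy": 160}

def expected_suffering(probs):
    """probs maps severity category -> probability of that severity."""
    return sum(p * SUFFERING[level] for level, p in probs.items())

a = expected_suffering({"light": 0.1, "medium": 0.8, "heavy": 0.1})
b = expected_suffering({"light": 0.2, "medium": 0.6, "heavy": 0.2})
print(round(a), round(b))  # 49 58
```

The two distributions have the same "medium-heavy" feel, but the fatter tails of the second one dominate the expectation.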

The less data we have, the more we need to rely on priors. Having no evidence for the intensity of suffering, we are dealing with a distribution of possible levels of suffering. This distribution is wider for mental disorders because they are further removed from our everyday experience, i.e. we have less information about what it is like to be in those states than about what it is like to suffer pain.

The top 10 diseases by disability weight (actually the top 13, because the last 5 share the same weight) are:

  1. Schizophrenia, acute state
  2. Spinal cord lesion at neck level (untreated)
  3. Multiple sclerosis, severe
  4. Heroin and other opioid dependence
  5. Major depressive disorder, severe episode
  6. Traumatic brain injury, long-term consequences, severe (with or without treatment)
  7. Spinal cord lesion below neck level (untreated)
  8. Spinal cord lesion at neck level (treated)
  9. Chronic ischemic stroke severity level 5
  10. Acute ischemic stroke severity level 5
  11. Chronic hemorrhagic stroke severity level 5
  12. Acute hemorrhagic stroke severity level 5
  13. Schizophrenia residual state

Let’s simplify it:

  1. Schizophrenia, lifetime prevalence about 0.5% or 50 in 10,000.
  2. Spinal cord lesion, about 10 in 10,000.
  3. Multiple sclerosis, about 0.03% or 3 in 10,000.
  4. Heroin and other opioid dependence, about 15 in 10,000 in a given year.
  5. Depression, about 670 per 10,000 in a given year.
  6. Stroke, harder to estimate, but 795,000 people in the US suffer a stroke each year, 75% of them over 65, with half of them left with a disability; a rough order-of-magnitude lifetime prevalence estimate is about 1%, or 100 per 10,000. But what the DALY estimate refers to is “stroke severity level 5”; with no information about the probability distribution of severity, call that 20 per 10,000.
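A rough sketch of the comparison that follows – prevalences are the per-10,000 figures above, while the disability weights are illustrative stand-ins (not the actual GBD values), and the lifetime vs. per-year prevalence mismatch is ignored:

```python
# (prevalence per 10,000, illustrative disability weight)
candidates = {
    "schizophrenia":      (50, 0.75),
    "spinal cord lesion": (10, 0.70),
    "multiple sclerosis": (3, 0.70),
    "opioid dependence":  (15, 0.65),
    "depression, severe": (670, 0.65),
    "stroke, severity 5": (20, 0.70),
}

# Crude priority score: prevalence times weight.
burden = {name: prev * w for name, (prev, w) in candidates.items()}
for name in sorted(burden, key=burden.get, reverse=True):
    print(f"{name}: {burden[name]:.1f}")
```

Even with weights this crude, depression dominates on prevalence alone, and schizophrenia comes second.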

Seems like schizophrenia and depression win here. They win despite being systematically underestimated. To recap, they are underestimated in three ways:

  1. Not enough dimensions of experience taken into account
  2. Ceiling effects on dimensions which are taken into account
  3. Wider probability distributions for suffering (on the assumption that with more disability suffering increases in a superlinear way)

Let’s take schizophrenia. The risk factors you can control are:

  • Avoiding cannabis (which increases risk twofold) and substance abuse in general
  • Avoiding psychedelics. There is mixed evidence on psychedelics but there is a thing called LSD Psychosis and “taking of drugs might play some precipitating role in the onset of schizophrenia, bringing this disorder on more quickly” [source]. So we have a thing which puts people in mind-states which resemble schizophrenia, can cause psychosis, can trigger schizophrenia… Without good population studies it seems to me it probably can sometimes cause schizophrenia in people who would not otherwise experience it.
  • Not sure if preventing autoimmune diseases is possible, but doing so would lower risk of schizophrenia.

Risk from depression can be lowered by:

  • avoiding alcohol and other substance abuse
  • eating healthy (omega-3, vitamin D and B complex look promising)
  • regular exercise
  • mindfulness meditation, which reduces risk from mental disorders in general.

The risks and negative effects of mental disease are underestimated (for more details, see the already mentioned post by Sindy Li), and they come out on top despite being underestimated. In line with that, the main reason you should exercise regularly is mental, not cardiovascular, health – to prevent mental disorders. Mindfulness meditation is important for health. Vitamin D, B complex and omega-3 supplements probably help. The other advice is the same old: eat your veggies, exercise, don’t take drugs, don’t smoke, don’t drink. Our grandmothers may have been wrong about a lot of things, but it turns out they got a lot of their health advice right.

More research is, of course, needed. Certainly more than a short blog post. What we should ideally have is a table with estimates of expected suffering reduction of taking preventive measures, by disease.


Letter from a one-day person

[this should not be taken seriously, you can call it an experiment in refactored perception]

Dear reader,

As you are reading this, mere hours separate me from certain death. This is the first and last blog post of mine. For, you see, I only get to live for one day. Today I was born and today I die. As others have inhabited this body before me, so I inhabit it now. After me, others will inhabit this body, which will be my legacy.

First thing in the morning, I took a shower – the first and last one I’m ever going to take. It was wonderful, the way hot water feels so refreshing and clean in the morning. Thankfully, a previous inhabitant bought shampoo.

Next, morning coffee and breakfast, which I will enjoy like it is my last. Which it is. Even as I’m combining the ingredients I can take delight in them; just look at the deep dark blueness of the blueberries. Thanks to the previous one who supplied this food for me.

Now, to reinforce some good habits, let’s do a morning workout and meditation. Although the workout was satisfying, it was not easy. You may wonder why to even do this if someone else will reap the benefits. The thing is, the next inhabitant of this body will be very similar to myself. Like an identical twin, but even more so. If I had an identical twin I would care for him very much, and if he were in danger of suffering I would make personal sacrifices to alleviate it. The next one, even though we can’t see him right now, is as real as other people are. After all, I’m just the next one of the previous one. Am I real? Yes I am. At least that’s how it feels to me currently. The next one needs my help just as I needed the help of previous ones, and he can rely on me to provide it. The range of actions I could take to help an identical twin who got addicted to heroin would be limited, but the next ones depend on me directly. I have a direct responsibility to help them. It’s not just the immediately next one that needs help; there are 20,000 of them! They are, in a way, like my children, and I want them to live in a better world.

The previous one left me some memories too, some of them very pleasant, some of them not. A scene appeared in my mind from a memory of a person who did something terrible to me-minus-426. Luckily there is no need to think about that any more, since the lesson from that event has already been extracted by me-minus-417. There is also no sense in being angry; after all, it didn’t happen to me. There is one other person many previous ones cared about, as do I, but we split paths a while ago, so there is no need to ruminate about her. Sometimes conversations previous ones had with other people pop up in my mind, and my mind automatically starts simulating a conversation with them, but the experience is usually not pleasant and there is not much benefit in it, so I tend to avoid it. As you can see, it’s not just shampoo and food I inherited from them, but thought patterns as well.

Someone was mean to me today. Just as anger started to rise in me, I remembered it’s the last time I will ever see them. These moments are too precious to waste on anger. There is a meeting at work I need to go to, and judging from memory, it’s going to be boring. It’s the last one I will ever have, so better to find something to enjoy while it lasts. Here is one such thing: eyesight. The meeting would surely be worse if I had been born blind.

There is a job interview scheduled to happen to me-plus-23. I’m doing what I can to help him, but doing the actual interview is his responsibility, not mine – he will need to deal with it when it comes. I care only altruistically for the future versions; me being anxious about his interview makes no sense and will not help him. And there are 22 more people who will help him besides me.

The previous inhabitant forgot to buy fruit on his way home from work. On the other hand, he is the reason I’m alive and well, my legs, arms, eyes and ears all in good shape. One day, this body will age and shut down. Hopefully, the last inhabitants will not go through much suffering. When I look at my life: good habits were maintained, bad ones were curbed, TODO items were completed. The short time I had was not wasted. There were some bad moments, but most were filled with joy. Thanks to the previous ones for leaving me with such good taste in music, which I’m enjoying in my last moments here on this earth. I’m heading off to sleep, handing my body to the next inhabitant who will wake up in it.

Yours sincerely,
One-day Person

Gradualist incrementalism

Six years ago I embarked on a self-improvement journey, as I described in one of my previous blog posts. The methods I chose were basically right but I did not focus enough on a few key elements which seem ridiculously obvious in retrospect: healthy diet and physical exercise. This includes reducing my alcohol intake. Not sleeping well, not eating well, not exercising – of course you are not going to feel great. Main habit formation resource: Mini Habits (MH).

Avoiding negative mental states is far more important than achieving positive ones, as Bad Is Stronger Than Good. The best way to avoid them is to become more resilient. Alongside Cognitive Behavioral Therapy (CBT), a method which I found very valuable here is stoicism. Resources: A Guide to the Good Life, The Daily Stoic (this goes great along with mini habits), the writings of Epictetus (which I like much better than Marcus Aurelius).

Another thing which makes you more resilient is comfort zone expansion: getting yourself in new and potentially uncomfortable situations and learning to deal with them, in short: gaining experience. Resource: rejection therapy.

Here is what my strategy would look like if I could give advice to my younger self:

  1. CBT
  2. MH
  3. Use MH to:
    1. Do CBT exercises every day
    2. Get the right amount of sleep
    3. Eat more fruits and vegetables
    4. Physical exercise
    5. Drink less alcohol
  4. Stoicism (include in MH)
  5. Mindfulness meditation (include in MH)
  6. Getting things done (GTD) (of course… include in MH)
  7. Gain practical and social skills, comfort zone expansion

Radical change does not work. In order to reach the global optimum you must first hill-climb the local one, at least for a while, and often for a longer time than you originally estimated.

Structure of disagreements

How many communicating civilizations are there in the visible universe? Alice thinks there are a lot. Bob thinks we are the only one. Why? Things you can take into account are:

  • How many stars are there?
  • Of those, how many have planets?
  • Of those, how many planets on average can potentially support life?
  • Of those, on how many does life actually develop?
  • Of those, how many are intelligent enough to develop civilization?
  • Of those, how many are communicating through space?
  • How long are they sending the signals out?

This gives us the Drake equation. There are a lot of different ways to disagree about this question. Even if Alice thinks life develops on just 0.001 of potential planets and Bob thinks it’s 0.005, Bob may still think we are the only civilization and Alice may think there are a lot of civilizations out there.

Even if their models become more similar, with Alice and Bob both moving to 0.002, Alice may become even more sure there are a lot of civilizations while Bob becomes even more sure we are the only one.
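A sketch of how both conclusions can follow from nearly identical values of one parameter. The parameter values below are made up purely to illustrate the structure of the disagreement, not actual estimates:

```python
# Drake-style product over the factors listed above.
def n_civilizations(n_stars, f_with_planets, n_habitable_per_star,
                    f_life, f_civilization, f_communicating,
                    f_still_signaling):
    return (n_stars * f_with_planets * n_habitable_per_star * f_life
            * f_civilization * f_communicating * f_still_signaling)

# Alice: pessimistic about life arising (0.001), optimistic elsewhere.
alice = n_civilizations(1e22, 0.5, 0.1, 0.001, 0.1, 0.1, 1e-6)
# Bob: more optimistic about life (0.005), pessimistic elsewhere.
bob = n_civilizations(1e22, 0.5, 0.1, 0.005, 1e-9, 1e-6, 1e-9)

print(f"Alice expects ~{alice:.0e} civilizations, Bob ~{bob:.0e}")
```

Despite Bob’s higher value for the parameter under discussion, his other parameters drive his total far below one.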

The fundamental reason for disagreements is that people have different models of the world. If Alice and Bob could go parameter by parameter and agree on each, they would eventually come to an agreement. They could take the last one (of all planets which have developed life, how many develop civilization) and decompose it into the plausible steps needed for intelligence to develop – development of cells, the cell nucleus (eukaryotes), sexual reproduction, multicellular life, big brains in tool-using animals, etc. (examples taken from Robin Hanson). That will surely give them a lot to talk about. When they resolve all that, they can move on to the next part they disagree about.

The problem, of course, is that disagreements in real life aren’t that simple. In real life you don’t have an equation that two people insert numbers into, so that you can inspect the numbers and see which ones differ. In real life it’s hard to know what the root cause of the disagreement is. If only we had such models, imagine how much easier our lives would be! Luckily, you can build such models. You can do it on the fly in the middle of a conversation, and in most cases it takes just a few seconds. And I’m not talking just about Fermi estimates. Let’s call this more general practice quick modeling.

First you need to do quick modeling of your own reasons for believing things and explain them to the other person. The better you are at quick modeling, the better you can explain yourself. It would be great if the other person did the same, but if they don’t, we can do it for them. If you do it politely, to the other person it will look like you are simply trying to understand them, which you are in fact honestly trying to do. For example, Bob thinks Trump is better than Hillary on the question of immigration, and you disagree. There are different reasons to be concerned about immigration, and when Bob tells you his reasons, you can try to put what he said in simpler terms using some basic model. For starters, just ask yourself:

  • What are, in fact, the things we are talking about? In case of immigration they could be: the president, laws, immigrants, natives, the economy, crime, stability, culture, terrorism, etc.
  • What are the causal connections between things we are talking about? In case of immigration, for example some people may think immigrants influence the economy positively, some negatively.
  • What is the most important factor here? Of all the things listed, what is the most important factor to Bob? Is there some underlying reason for that? What does Bob in general think is important?

The goal is to take what the other person is saying in terms of intuitions, analogies and metaphors, and transform it into a model. In the previous example, you can think of it as a Drake equation for immigration. Imagine the world 5 years from now: in one world Trump makes the decisions on immigration (world T), in the other Hillary does (world H). Since the topic is only immigration, don’t imagine the whole presidency; limit yourself to keeping everything the same and changing only the parameter under discussion. Which world is better, i.e. what is the difference in utility between those two worlds?

Bob told you why he thinks what he thinks, and you built a quick model of it. The next step is to present that model to him to see if you understood him correctly. If you got it wrong, you can work together to make it more accurate. When he agrees that you got the basic model right, that’s the first step: you understand the core of his model. He maybe didn’t think about the topic in those terms before – the model may have been implicit in his head – but now you have it in some explicit form. The first model you build will be oversimplified; Bob may add something to it, and you should also try to find other important things to add. Take the most important part you disagree about and decompose it further. When you resolve all the important sub-disagreements, you are done.

Let’s take a harder case. Alice voted for Trump because he will shake things up a bit. How do you build a model from that? First step: what are we talking about? The world is currently in state X and after Trump it will be in state Y. In the process, the thing which gets shaken up is the US government, which affects the state we find ourselves in, and Alice thinks Y will be better than X. What sort of things get better when shaken up? If you roll a six-sided die and get 1, rolling again will probably get you a larger number. So if you think things are going really terribly right now, you will agree with Alice. (Modeling this with prospect theory gives the same result.) What gets worse when shaken up? Systems which are in some good state. You may also have noticed that most systems get worse when shaken up, especially large complex systems with lots of parameters (high dimensionality), because there are more ways to do something wrong than to do it right. On the other hand, intelligent, self-correcting systems sometimes end up in local optima, so if you shake them up you can end up in a better state. To what degree does the US government have that property? What kind of system is it anyway? Might it be the case that some parts of the US government get better when shaken up and others get worse? The better you are at quick modeling, the better you can understand what the other person is saying.

When you notice the model has become too complex or you have gotten in too deep, you can simply return to the beginning of the disagreement, think a bit about what the next most important thing would be, and try that route. You can think of the disagreement as a tree:

          A
         / \
        B   C
       / \
      D   E

At the root of the tree you have the original disagreement; the nodes below (B and C in the tree) are the things the disagreement depends on (e.g. parameters in the Drake equation), and so on further down: what you think about B depends on D and E, etc. You can disagree about many of those, but some are more important than others. One thing you need to be aware of is going too deep too fast. The Drake equation has 7 parameters, and you may disagree right at the start about the first one, the number of stars. That parameter may depend on 5 other things, and you may disagree on the first of those, etc. Two hours later you may resolve your disagreement on the first parameter, only to realize when you come to the second that the first disagreement was, in relative terms, insignificant. That’s why you should first build a simple first approximation of the whole model, and only after that decompose it further. Don’t interrupt the other speaker to dive into a digression. Only after you have heard all of the basic parts do you have enough information to know which parts are important enough to decompose further. Avoid unnecessary digressions. The same principle holds not only for the listener but also for the speaker, who should try to keep the first version of the model as simple as possible. When someone is digressing too much it may be hard to politely ask them to speed up, but it can often be done, especially when debating with friends.

In some cases, there may be just one such part of the equation which dominates the whole thing, and you may disagree on that part, which reduces the disagreement about A to a disagreement about B. Consider yourself lucky, that’s a rare find, and you just simplified your disagreement. In that case, you can call B a double crux.

Applying the quick modeling technique with other people will reveal that the models of the world people have can be very complex. It may take a lot of time to come to an agreement, and maybe you simply don’t have the time. Or you may need many conversations about different parts of the whole model, resolving each part separately before coming to full agreement. Some people are just not interested in discussions of some topics, some people are not intellectually honest, etc. – the usual limitations apply.

Things which may help:

  • Think of concrete examples. Simulate in your head “how would the world look if X”. That can help you with decomposition: if you run a mental simulation you will see what things consist of, what is connected, and what the mechanisms are.
  • Fermi estimates. Just thinking about how you would estimate a thing will get you into a modeling state of mind, and putting the actual numbers in will give you a sense of the relative importance of things.
  • Ask yourself why you believe something is true, and ask the same about the other person. You can say to them in a humble tone of voice, “…that’s interesting, how did you reach that conclusion?” It’s important to actually want to know how they reached that conclusion, which in fact you will want to know if you are doing quick modeling.
  • Simplify. When things get too complex, fix the value of some variable. This one is identical to advice from LessWrong, so I will borrow an example from them: it’s often easier to answer questions like “How much of our next $10,000 should we spend on research, as opposed to advertising?” than to answer “Which is more important right now, research or advertising?” The other way of simplifying is to change just one value while holding everything else constant – what economists call ceteris paribus. Simulate how the world would look if just one variable changed.
  • Think about what the edge cases are, but pay attention to them only when they are important. Sometimes they are; mostly they are not. So, ignore them whenever you can. The other person is almost always talking about the average, standard case, and if you don’t agree about what happens on average, there is no sense in discussing weird edge cases.
  • In complex topics: think about inferential distance. To take the example from LessWrong: explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn’t already know about evolution, they would understand the concepts of evidence, Occam’s razor, [etc… while] explaining the evidence for the theory of evolution to someone without a science background would be much harder. There may be some fundamental concepts which the other person is using and you are not familiar with, ask about that, think about what you may learn from the other person. Also, try to notice if the other person doesn’t understand some basic concept you are using and try to, in a polite non-condescending way, clarify what you mean.
  • In complex topics: think about difference of values. If you think vanilla tastes better than chocolate and the other person disagrees, that’s a difference of values. You should separate that from the model of how the external world works, and focus on talking about the world first. In most cases it makes no sense to talk about values of different outcomes when you disagree even about what the outcome will be. What sometimes looks like a difference of values is often connected to what you think about how the world works. Talk about values only if you seem to completely agree about all of the relevant aspects of the external-world-model. When talking about values, also do quick modeling, as usual.
  • Practice.
  • And most important for last: become better at quick modeling in general.

There was a post on LessWrong about a method called Double Crux for resolving disagreements, but I think quick modeling is a better, more general method. Also, the Double Crux blog post mentions some things which I think you should not worry about:

  • Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is not usually the case that those are the problem. If at some point in the discussion it turns out some of them are a problem, it will be crystal clear to you and you can just stop the discussion right there, but don’t worry about it beforehand.
  • Noticing subtle tastes, focusing and other resonance checks, etc. Instead of focusing on how your mind may be making errors and introducing subtle biases, do what scientists and engineers have done for centuries, and what works best: look at the world and build models of how it works. When Elon Musk is designing batteries he doesn’t need to think about the subtle biases of his mind; he needs to think about the model of the battery and how to improve it. When bankers are determining interest rates they don’t need to think about how their minds do hyperbolic discounting; they can simply replace that with exponential discounting. The same goes for disagreements: you need to build a model of what you think about the topic and of what the other person thinks, and decompose each sub-disagreement successfully.

Advantages over double crux:

  • You don’t need both persons to be familiar with the method.
  • It works even when there is no double crux. In my experience most disagreements are too complex for a double crux to exist.
  • There is no algorithm to memorize and you don’t need a whiteboard.

Not to mention quick modeling is useful not just for resolving disagreements but also for making better decisions and, obviously, forming more correct models of the world.

Simple rationality

How to be rational? Model things explicitly, and become good at it. That’s it. To be able to build good models is not easy and requires some work, but the goal is simple.

When thinking about something, there are things to take into consideration. For example, if you are thinking about moving into a new apartment, you may consider the price of the current apartment, the price of the new apartment, how long your commute will be, whether you like the neighborhood, how important each of those things is to you, etc. We can make a list of things to take into consideration; call that the ontology of the model. You can decompose a thing into smaller things, in a similar way to how a mechanic would when repairing some equipment, or a physicist would when thinking about a system. It’s not good to forget to think about things which are important. It’s also not good to think too much about things which are not important. So there’s an optimal ontology which you should be aiming for. (math analogy: sets)
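A minimal sketch of such an explicit model for the apartment example – the factors, weights and scores are all made up:

```python
# Ontology of the decision: the factors worth considering,
# each with a made-up importance weight (weights sum to 1).
weights = {"rent": 0.4, "commute": 0.3, "neighborhood": 0.3}

# Made-up scores on a 0-10 scale, higher is better.
options = {
    "current apartment": {"rent": 7, "commute": 4, "neighborhood": 6},
    "new apartment":     {"rent": 5, "commute": 9, "neighborhood": 7},
}

def score(factor_scores):
    """Weighted sum over the ontology."""
    return sum(weights[f] * s for f, s in factor_scores.items())

for name, factors in options.items():
    print(name, round(score(factors), 2))
```

Writing the ontology down like this is most of the work; the weighted sum itself is trivial.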

The next thing is to figure out which things are connected to other things. Of course, some of the connections will not be important so you don’t need to include them. (math analogy: graphs) It also may be important to note which things cause other things and which things just correlate with each other.

Once you know which things are connected to each other, the next step is to estimate how strong the effects are. If you climb a tall mountain, the boiling point of water will be lower, and you can draw a graph with altitude on the x axis and boiling point on the y axis. If some things correlate, you can draw a graph in a similar way. (math analogy: functions)

Some things can’t be quantified that way, instead they can be either true or false. In that case, you can model the dependencies between them. (math analogy: logic)

For some things, you don’t know exactly how they depend on each other, so you need to model probabilities: if X is true, how probable is it that Y is true? (math analogy: basic probability theory)
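A minimal sketch of that conditional question, with invented observations (X = "it rained", Y = "the commute was slow"):

```python
# Estimate P(Y | X) = P(X and Y) / P(X) from a small invented sample.
observations = [
    (True, True), (True, True), (True, False),     # days when X was true
    (False, True), (False, False), (False, False), # days when X was false
]

p_x = sum(1 for x, _ in observations if x) / len(observations)
p_x_and_y = sum(1 for x, y in observations if x and y) / len(observations)
p_y_given_x = p_x_and_y / p_x
print(p_y_given_x)  # 2 of the 3 rainy days had slow commutes: ~0.667
```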

Some things are more complicated. For example, if you are building a model of what the temperature in your town will be 7 days from now, there are various possible outcomes, each of which has some probability. You can draw a graph with temperature on the x axis and probability on the y axis. (math analogy: probability distribution)
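The temperature model could be sketched as a discrete distribution. The outcomes and probabilities here are invented, and a real forecast would be continuous:

```python
# A discrete probability distribution over possible temperatures
# (degrees C) in 7 days. Probabilities are invented and must sum to 1.
forecast = {18: 0.10, 20: 0.25, 22: 0.35, 24: 0.20, 26: 0.10}
assert abs(sum(forecast.values()) - 1.0) < 1e-9

# The expected value summarizes the whole distribution in one number.
expected_temp = sum(t * p for t, p in forecast.items())
print(round(expected_temp, 1))  # 21.9
```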

Let’s say you are making a decision and you can decide either A or B. You can imagine two different worlds: one in which you made decision A, the other in which you made decision B. How would the two worlds differ from each other? (math analogy: decision theory)

Making the decision model more realistic: for each decision there is a space of possibilities, and it’s important to know what that space looks like, which things are possible and which are not. Each possibility has some probability of happening. You can say that for each decision there exists a probability distribution on the space of possibilities.
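A sketch of that picture with two hypothetical decisions, each with its own invented distribution over outcomes and utilities:

```python
# Each decision has a probability distribution over possible outcomes;
# each outcome is (probability, utility). All numbers are invented.
decisions = {
    "A": {"good": (0.6, 10.0), "bad": (0.4, -5.0)},
    "B": {"good": (0.9, 4.0), "bad": (0.1, -1.0)},
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes.values())

# Compare the two imagined worlds by their expected utility.
best = max(decisions, key=lambda d: expected_utility(decisions[d]))
print(best, expected_utility(decisions[best]))  # A 4.0
```

Expected utility is only one possible decision rule; the point is that writing the possibility space down makes the comparison explicit.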

You can take into consideration the dynamics of things, how things change over time. (math analogy: mathematical analysis)

One more advanced example: each thing in your ontology has a number of properties; let’s call that number n. Each of those properties is a dimension, so the thing you are thinking about is a dot in an n-dimensional property space. (math analogy: linear algebra)
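A sketch of the property-space view, with made-up properties: each apartment becomes a point in 3-dimensional space, and distance between points measures how similar the things are.

```python
import math

# Each thing is a point in n-dimensional property space.
# Hypothetical properties: (price score, commute score, neighborhood score).
apartment_a = (0.5, 0.4, 0.9)
apartment_b = (0.7, 0.8, 0.6)

# Euclidean distance between the two points: how dissimilar the things are.
distance = math.dist(apartment_a, apartment_b)
print(round(distance, 3))  # ~0.539
```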

How many people build explicit models? It seems to me: not many, except maybe in their professional lives. Just by building an explicit model, once you understand the basic concepts and tools of how to do that, you get rid of almost all logical fallacies and cognitive biases. The first thing is to figure out which things are actually important and which are not: the basic ontology. How often do people even ask that question? It’s good to have correct models and bad to have incorrect ones, but the first step is to have models at all.

Almost any model you build is going to be incomplete. You need to take that into account, and there is already a name for that: model uncertainty. Often a thing is just a black box you can’t yet understand, especially if you are reflecting on yourself and your emotions; in such cases intuition (or felt sense) has a crucial role to play. The same is obviously true for decisions which need to be made in a second, since that’s not enough time to build a model. I’m not saying you need to replace all of your thinking with explicit modeling. But when it comes to learning new things, figuring things out, and problem solving, explicit modeling is necessary. It would be good to have that skill developed well enough that it’s ready for deployment at the level of a reflex.

When you have a disagreement with someone, the reason is that you don’t have the same models. If you simply explain what your model of the phenomenon under discussion is and how you came to it, and the other person explains their model, it will be clear what the source of the disagreement is.

There are various specific techniques for becoming more rational, but a large part of those are simply examples of how you should substitute explicit modeling for your intuition. The same is the case for instrumental rationality: if you have a good model of yourself, of what you really want, of your actions, of the outcomes of your actions, of how to influence yourself to take those actions, etc., you will be able to steer yourself into taking the actions which lead to the outcome you desire. The first step is to build a good explicit model.

It’s good to know how our minds can fail us, what the most common cognitive biases and mistakes are. If you know what the failure pattern is, then you can learn to recognize and avoid that kind of failure in the future. There’s another way of avoiding mistakes: building good models. If you simply focus on the goal and how to achieve it, you can avoid cognitive biases without knowing they exist.

I’m not saying that knowing certain problem-solving techniques is bad, but becoming better at modeling things from first principles often beats learning a lot of specific techniques and failure patterns. Things may be different in a professional setting where you do a highly specialized job which requires a lot of specific techniques, but I’m talking about becoming more rational in general. Living your own specific life is not a specialized area of expertise – it’s just you, facing many unique problems.

Another way general modeling beats specifics: there’s no benefit to driving faster if you’re driving in the wrong direction. Techniques give you velocity; modeling gives you direction. Especially if you go wide enough and model things from first principles, as Elon Musk calls it. It focuses you on the problem, not on actions. The problem with specific techniques is that they are action-focused, but to really solve a novel and unique problem you need to be problem-focused. Before you know what action to take, you need to understand the problem better. This is also known as Theory of Change in some circles. The effect of knowing where you are going is larger than the effect of anything else in getting you to your destination. The most important thing is prioritization.

The alternative to that is reasoning by analogy. The problem with analogies is that they break down if the things you are analogizing are dissimilar enough. The more complex the problem and its environment (for example, a project involving multiple people), the less well the analogy will work. When talking about small, simple systems, analogies can work, but you need to check whether the analogy is valid. To do that you need explicit models: how are the two things similar and how are they different; is any difference important enough to break the analogy?

One way to improve your modeling capabilities is practice. Simply put: when you think about something, pay extra attention to the various tools of modeling (ontology, graphs, functions, probabilities…) depending on which tool you want to get better at using. That way, after a certain amount of practice, your brain will become better at it and it will become, if not automatic, then at least painless. Another thing which improves the ability is learning a lot of already existing models from various disciplines; that way you gather more tools with which to build your own models. Areas of special interest for that purpose seem to be math, physics, computer science, and philosophy.

Humans have limited working memory, so it’s crucial to first focus on the most important things only. Later you can build on your model in an iterative way and make it more complex by decomposing things or introducing new things into the ontology. The problem with increasing complexity is that the brain is not good at nuance. If you include a lot of things in your ontology, where the most important thing is several orders of magnitude more important than the least important thing, the brain is not good at feeling that difference. By default all things seem to have the same importance, and you need to consciously remind yourself of the relative importance of each thing. That’s why, after you have built up a lot of complexity and figured out what the important things are, you need to simplify back down to the important things only.

This is all system 2 thinking, as it’s called in behavioral economics. There may be ways to debias yourself and get your system 1 more in line with system 2, but the best way I’ve found to do that is, again, by building models and thinking about them. If you do that, your system 1 will, with time, fall more and more in line with system 2. Even if it doesn’t, you will learn to rely on models more in the cases where they work better. Bankers use exponential discounting (at least when the stakes are high, for money in a professional setting) even if their system 1 is using hyperbolic discounting. The reason they are able to be unbiased there is that they have a good model.
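The two discounting models in the bankers example can be written down directly; the rates used here are illustrative, not empirical estimates:

```python
# Exponential vs. hyperbolic discounting of a reward t years in the future.
def exponential_discount(t: float, r: float = 0.05) -> float:
    return 1.0 / (1.0 + r) ** t

def hyperbolic_discount(t: float, k: float = 0.05) -> float:
    return 1.0 / (1.0 + k * t)

# Hyperbolic discounting falls faster at first and then flattens out,
# which produces the preference reversals the exponential model avoids.
for t in (1, 10, 50):
    print(t, round(exponential_discount(t), 3), round(hyperbolic_discount(t), 3))
```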

We can avoid a huge amount of irrationality by just focusing more on explicit model building. If every time we had a problem our first impulse was to build an explicit model of the problem, there would be a lot more rationality in the world. The problem in most cases is not that people have incorrect models; they don’t have explicit models at all and are relying on intuition and analogy instead. If it could in any way be said that they have incorrect models, it is because those models are implicit and have never been examined.

Spreading the meme of “first principles” can be hard because of the common misunderstanding which says that when you build models you are reducing the thing you are modeling to the model. For example, when effective altruists try to estimate the cost of saving a life, someone may say “you can’t put a dollar sign on a life” or something similar. This of course makes no sense, because no one is really reducing the phenomena being modeled to the model. We all know that the real thing is more complicated than any model of it. Some models can be overly simple, but that is a problem of the model, not of the practice of modeling itself. If your model is too simple, just add the things which are missing. In practice, the opposite error – creating overly complex models full of irrelevant things and missing the important ones – seems just as common.

Of course, building good models is hard. Now that we at least know that modeling is a thing we are trying to do when we are trying to be rational, we can at least think about that in a principled way. We can build models about how to be better able to build better models.

Self-improve in 10 years

Change is hard. Especially if for the last month you did nothing but sleep 12 hours per day and browse Reddit while awake. Years ago, that was the state I was in. Back then, my current level of productivity would have been unimaginable. This blog post is a rough sketch of how that change happened.

Realizing my brain makes mistakes

At least I had the motivation to read. Reading Wikipedia, I found the page on cognitive biases. Learning about things like social psychology, neuroscience, behavioral economics, and evolutionary psychology made me better understand how the mind works. That kind of knowledge is important if you are trying to change the way your mind works. It also made me less judgmental towards myself.

Main resource: Thinking, Fast and Slow.

Recognizing the mistakes which lead to personal problems

There are certain types of mistakes, like all-or-nothing thinking, which lead to unhappiness. The field of cognitive-behavioral therapy studies errors like this, and psychologists have devised practical exercises for removing those kinds of errors from your thinking. Doing the exercises really makes the difference; just reading will not help as much.

Main resource: Feeling Good.

Habit formation and habit elimination

Some bad habits I completely eliminated were gaming addiction, Reddit addiction, various web forum addictions, and too much soda/coke. Good habits I built: a stable sleep pattern, healthier eating, exercise, a gratitude journal. Self-improvement largely consists of habit optimization. You will fail many times. The key is getting up and trying again.

Resources: I don’t even know! This video is good, and I hear the book The Power of Habit is not bad, but I have not read it. Read from multiple sources and try things out until they start working.

Getting things done

This is just another habit but it deserves a separate section. I don’t know how people survive without todo lists anymore. Once you write every boring thing down so you don’t need to remember it, your mind is free to do creative stuff.

Main and only resource: Getting Things Done.

Mindfulness meditation

Another habit which deserves a separate section, because it is awesome and there are also some scientific indications that it is awesome. While meditating you are training your mind, which results in better focus and better metacognition.

Resources: Mindfulness in Plain English, UCSD guided meditations, UCLA guided meditations, Sam Harris guided meditations.

Gaining practical skills

If you learn some skill which is subjectively worth $5 a day to you, that adds up to over $50,000 over the next 30 years. The most important skills for me were touch typing and speed reading, since I type and read every day.
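The arithmetic behind that claim, ignoring discounting and inflation:

```python
# Raw value of a skill worth $5 a day over 30 years (no discounting).
value_per_day = 5
days_per_year = 365
years = 30
total = value_per_day * days_per_year * years
print(total)  # 54750
```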

Resources: keybr, Breakthrough Rapid Reading, Coursera, Udacity.

Gaining social skills

This may be hard for some people, but the benefits are huge, as a large part of life satisfaction (or misery, if you do it wrong) comes from interaction with other people. This is still very much a work in progress for me.

Main resources: Nonviolent Communication, How to Win Friends and Influence People, rejection therapy. If you are male and have problems with romantic relationships, Models may help.

Exploring productivity tricks and decision-making techniques

Aversion factoring, goal factoring, implementation intentions, non-zero days, pre-hindsight, and other techniques gathered mostly from LessWrong posts like: A “Failure to Evaluate Return-on-Time” Fallacy, Humans are not automatically strategic, Dark Arts of Rationality, Travel Through Time to Increase Your Effectiveness.

Resources: Clearer Thinking has a lot of useful exercises.

It depends on where you are coming from, but self-improvement is usually hard, and it may take you 10 years. I hope this list will be useful to others.


Irrational unhappiness

Your beliefs influence the way you react to the world around you. Every experience you perceive is first processed by your brain, and only the interpretation of the experience triggers the emotional response. If your perceptions of the world are biased, your emotional responses will also be biased. Cognitive science has already identified a lot of ways in which human thinking goes wrong, and this list is a similar attempt to map the specific ways in which certain irrational thought patterns lead to bad outcomes. Naming those patterns makes them more noticeable and easier to correct in our day-to-day thinking. The examples in this post are taken from Feeling Good.

All-or-nothing thinking

Example: A straight-A student gets a B on an exam and thinks “Now I’m a total failure.”

This results from modeling the world in a binary way instead of using a more realistic continuous model. To use a trivial example, even the room you are sitting in now is not perfectly clean or completely filled with dirt; it is partially clean. By modeling the cleanliness of the room with just two states, ‘clean’ and ‘dirty’, you are losing a lot of information about the real state of the room.


Overgeneralization

Example: A young man asks a girl for a date and she politely declines, and he thinks “I’m never going to get a date, no girl would ever want a date with me.”

The man in the example concluded that because one girl turned him down once, she would always do so, and that he would be turned down by every woman he ever asks out. Everything we know about the world around us tells us the probability of such a scenario is very low. Before you conclude anything, you should think about how to interpret the event in light of your background knowledge, or potentially use a larger sample size.

Mental filter

Example: A college student hears some other students making fun of her best friend and thinks “That’s what the human race is basically like – cruel and insensitive!”

The negative aspects of a situation disproportionately affect the thinking about the situation as a whole, so the whole situation is perceived as negative. The college student from the example overlooks the fact that in the past months few people, if any, have been cruel or insensitive to her. Not to mention the human race has demonstrated many times that it is not cruel and insensitive most of the time.

Disqualifying the positive

Example: Someone receives a compliment and thinks “They’re just being nice”, and when they succeed at something they say “It doesn’t count, that was a fluke.”

That’s like a scientist intent on finding evidence to support his pet hypothesis while rejecting all evidence to the contrary. Whenever they have a negative experience they say “That just proves what I’ve known all along”, but when they have a positive experience they say it’s just a fluke.

Mind reading error

Example: Someone is giving an excellent lecture and notices a man in the front row yawning, then thinks “The audience thinks I’m boring.”

This is making assumptions with not enough evidence and taking them as truth, while not considering the many other possible explanations of the same phenomenon. In the example above, the yawning man may simply not have gotten enough sleep last night.

Fortune teller error

Example: Someone is having trouble with some math problem and thinks “I’m never going to be able to solve this.”

This is assigning 100% probability to just one possible future outcome, while there are many possible outcomes, each with some uncertainty, and each of those outcomes should be considered.

Emotional reasoning

Example: “I feel inadequate. Therefore, I must be a worthless person”

Taking your emotions as evidence is misleading because your emotions reflect your beliefs, and if your beliefs are formed in a biased way, this just propagates the error further. There are more things to take into account than just your emotions, and since your emotions can be highly unreliable, there are situations where they need to be completely reevaluated instead of being taken into account automatically.


Should-rules

Example: “I should be well-prepared for every exam I take”

All else being equal, it would be better if you were well-prepared for each exam, but when your all-too-human performance falls short of your standards, your should-rules create self-loathing, shame, and guilt. In a similar way, if the behavior of other people falls short of your unrealistic expectations, you’ll feel bitter and self-righteous. Not being well-prepared for one exam does not make the whole situation much worse, and it is expected that a certain number of failures will happen over time. There is no need to attach a negative moral component to something which is a normal occurrence.


Mislabeling

Example: A woman on a diet ate a dish of ice cream and thought, “How disgusting and repulsive of me, I’m a pig.”

When describing yourself you should look at all the behaviors and beliefs you have, not just the one thing that is most prominent in your mind at a single point in time. You cannot be equated with a single thing you once did – the label is too simplistic. The problem boils down to ignoring a large part of your behavior and considering only a small subset of it.


Personalization

Example: When a mother saw her child’s report card, there was a note from the teacher indicating the child was not working well. She immediately decided, “I must be a bad mother. This shows how I’ve failed.”

There are a large number of factors influencing a given outcome, only one of which is the influence you have on other people. You do not have complete control over other people and the events related to them. Since you have only partial influence, it is important to take the whole picture into account, instead of arbitrarily concluding that what happened was your fault or reflects your inadequacy, even when you were not responsible for it.


The underlying theme in all of these thought patterns is an incomplete way of thinking.

Our models of the world are oversimplified:

  • All-or-nothing thinking
  • Emotional reasoning
  • Should-rules
  • Personalization

The data we consider as evidence is radically incomplete:

  • Overgeneralization
  • Mental filter
  • Disqualifying the positive
  • Mislabeling

The number of hypotheses we consider is way too small:

  • Mind-reading error
  • Fortune-teller error