Letter from a one-day person

[this should not be taken seriously, you can call it an experiment in refactored perception]

Dear reader,

As you are reading this, there are just hours separating me from certain death. This is the first and the last blog post of mine. For, you see, I only get to live for one day. Today I was born and today I die. Others inhabited this body before me, and after me others will inhabit it; this body will be my legacy.

The first thing in the morning I took a shower, the first and the last one I’m ever going to take. It was wonderful, the way the hot water feels so refreshing and clean in the morning. Thankfully, a previous inhabitant bought shampoo.

Next, morning coffee and breakfast, which I will enjoy like it is my last. Which it is. Even as I’m combining the ingredients I can take delight in them; just look at the deep dark blueness of blueberries. Thanks to the previous one who supplied this food for me.

Now, reinforcing some good habits: let’s do a morning workout and meditation. The workout was satisfying, but it was not easy. You may wonder why even do this if someone else will reap the benefits. The thing is, the next inhabitant of this body will be very similar to me. Like an identical twin, but even more so. If I had an identical twin I would care for him very much, and if he were in danger of suffering I would make personal sacrifices to alleviate it. The next one, even though we can’t see him right now, is as real as other people are. After all, I’m just the next one of the previous one. Am I real? Yes I am. At least that’s how it feels to me currently. The next one needs my help just as I needed the help of the previous ones, and he can rely on me to provide it. The range of actions I could take to help an identical twin who got addicted to heroin would be limited, but the next ones depend on me directly. I have a direct responsibility to help them. And it’s not just the immediately next one that needs help – there are 20,000 of them! They are, in a way, like my children, and I want them to live in a better world.

The previous one left me some memories too, some of them very pleasant, some of them not. A scene appeared in my mind from a memory of a person who did something terrible to me minus 426. Luckily there is no need to think about that any more, since the lesson from that event was already extracted by me minus 417. It also makes no sense to be angry; after all – it didn’t happen to me. There is one other person many previous ones cared about, as do I, but we split paths a while ago, so there is no need to ruminate about her. Sometimes conversations the previous ones had with other people pop up in my mind, and my mind automatically starts simulating a conversation with them, but the experience is usually not pleasant and there is not much benefit in it, so I tend to avoid it. As you can see, it’s not just shampoo and food I inherited from them, but thought patterns as well.

Someone was mean to me today. Just as anger started to rise in me, I remembered it’s the last time I will ever see them. These moments are too precious to waste on anger. There is a meeting at work I need to go to, and judging from memory, it’s going to be boring. It’s also the last one I will ever have, so better to find something to enjoy while it lasts. Here is one such thing: eyesight. The meeting would surely be worse if I had been born blind.

There is a job interview scheduled to happen to me plus 23. I’m doing what I can to help him, but doing the actual interview is his responsibility, not mine – he will need to deal with it when it comes. I care about the future versions only altruistically, so my being anxious about his interview makes no sense and will not help him. And there are 22 more persons who will help him besides me.

The previous inhabitant forgot to buy fruit on his way home from work. On the other hand, he is the reason I’m alive and well, my legs, arms, eyes and ears all in good shape. One day this body will age and shut down. Hopefully the last inhabitants will not go through much suffering. When I look at my life: good habits were maintained, bad ones were curbed, TODO items were completed. The short time I had was not wasted. There were some bad moments, but most were filled with joy. Thanks to the previous ones for leaving me such good taste in music, which I’m enjoying in my last moments here on this earth. I’m heading off to sleep, handing my body over to the next inhabitant, who will wake up in it.

Yours sincerely,
One-day Person

Gradualist incrementalism

Six years ago I embarked on a self-improvement journey, as I described in one of my previous blog posts. The methods I chose were basically right but I did not focus enough on a few key elements which seem ridiculously obvious in retrospect: healthy diet and physical exercise. This includes reducing my alcohol intake. Not sleeping well, not eating well, not exercising – of course you are not going to feel great. Main habit formation resource: Mini Habits (MH).

Avoiding negative mental states is far more important than achieving positive ones, as Bad Is Stronger Than Good. The best way to avoid them is to become more resilient. Alongside Cognitive Behavioral Therapy (CBT), a method which I found very valuable here is stoicism. Resources: A Guide to the Good Life, The Daily Stoic (this goes great along with mini habits), the writings of Epictetus (which I like much better than Marcus Aurelius).

Another thing which makes you more resilient is comfort zone expansion: getting yourself in new and potentially uncomfortable situations and learning to deal with them, in short: gaining experience. Resource: rejection therapy.

Here is how my strategy would look if I could give advice to my younger self:

  1. CBT
  2. MH
  3. Use MH to:
    1. Do CBT exercises every day
    2. Get the right amount of sleep
    3. Eat more fruits and vegetables
    4. Physical exercise
    5. Drink less alcohol
  4. Stoicism (include in MH)
  5. Mindfulness meditation (include in MH)
  6. Getting things done (GTD) (of course… include in MH)
  7. Gain practical and social skills, comfort zone expansion

Radical change does not work. In order to reach the global optimum you must first hill-climb the local one, at least for a while, and often for a longer time than you originally estimated.

Structure of disagreements

How many communicating civilizations are there in the visible universe? Alice thinks there are a lot. Bob thinks we are the only one. Why? Things you can take into account are:

  • How many stars are there?
  • Of those, how many have planets?
  • Of those, how many planets on average can potentially support life?
  • Of those, on how many does life actually develop?
  • Of those, how many are intelligent enough to develop civilization?
  • Of those, how many are communicating through space?
  • How long do they keep sending signals out?

This gives us the Drake equation. There are a lot of different ways to disagree about this question. Even if Alice thinks life develops on just 0.001 of potential planets and Bob thinks it’s 0.005, Bob may still conclude we are the only civilization while Alice concludes there are a lot of civilizations out there.

Even if their models become more similar, with Alice and Bob both moving to 0.002, Alice can become even more sure there are a lot of civilizations while Bob becomes even more sure we are the only one.
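A quick sketch in Python makes this concrete. Every number below is invented purely for illustration, not a serious estimate; the point is that the equation is a product, so two people can nearly agree on one factor and still land on opposite conclusions:

```python
# A minimal sketch of the Drake equation; all parameter values here are
# invented for illustration only.
def drake(stars, f_planets, n_habitable, f_life, f_civ, f_comm, f_alive_now):
    """Expected number of communicating civilizations: the product of the factors."""
    return stars * f_planets * n_habitable * f_life * f_civ * f_comm * f_alive_now

# Alice and Bob differ only slightly on f_life (0.001 vs 0.005)...
alice = drake(1e11, 0.5, 2, 0.001, 0.5, 0.5, 1e-4)    # thousands of civilizations
bob   = drake(1e11, 0.5, 2, 0.005, 1e-9, 0.01, 1e-7)  # effectively just us
# ...yet their other parameters push them to opposite conclusions.
```

Because the conclusion is a product of all the factors, you cannot locate the disagreement without comparing factor by factor.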

The fundamental reason for disagreements is that people have different models of the world. If Alice and Bob could go parameter by parameter and agree on each, they would eventually come to an agreement. They could take one parameter (of all planets which have developed life, how many develop civilization) and decompose it into plausible steps needed for intelligence to develop – development of cells, the cell nucleus (eukaryotes), sexual reproduction, multicellular life, big brains in tool-using animals, etc. (examples taken from Robin Hanson). That will surely give them a lot of things to talk about. When they resolve all that, they can move on to the next part they disagree about.

The problem, of course, is that disagreements in real life aren’t that simple. In real life you don’t have an equation into which two people insert numbers so you can inspect the numbers and see which ones are different. In real life it’s hard to know what the root cause of the disagreement is. If only we had such models, imagine how much easier our lives would be! Luckily, you can build such models. You can do it on the fly in the middle of a conversation, and in most cases it takes just a few seconds. And I’m not talking just about Fermi estimates. Let’s call this more general practice quick modeling.

First you need to do quick modeling of your own reasons for believing things and explain them to the other person. The better you are at quick modeling, the better you can explain yourself. It would be great if the other person did the same, but if they don’t, we can do it for them. If you do it in a polite way, to the other person it will look like you are simply trying to understand them, which you in fact honestly are trying to do. For example, Bob thinks Trump is better than Hillary on the question of immigration, and you disagree. There are different reasons to be concerned about immigration, and when Bob tells you his reasons, you can try to put what he said in simpler terms using some basic model. Just ask yourself, for starters:

  • What are, in fact, the things we are talking about? In case of immigration they could be: the president, laws, immigrants, natives, the economy, crime, stability, culture, terrorism, etc.
  • What are the causal connections between things we are talking about? In case of immigration, for example some people may think immigrants influence the economy positively, some negatively.
  • What is the most important factor here? Of all the things listed, what is the most important factor to Bob? Is there some underlying reason for that? What does Bob in general think is important?

The goal is to take what the other person is saying in terms of intuitions, analogies and metaphors and transform it into a model. In the previous example, you can think of it as a Drake equation for immigration. Imagine the world 5 years from now: in one, Trump makes decisions on immigration (world T); in the other, Hillary does (world H). Since the topic is only immigration, don’t imagine the whole presidency; keep everything else the same and change only the parameter under discussion. Which one is better, i.e. what is the difference in utility between those two worlds?

Bob told you why he thinks what he thinks, and you built a quick model of it. The next step is to present that model to him to check that you understood him correctly. If you got it wrong, you can work together to make it more accurate. When he agrees that you got the basic model right, that’s the first step: you understand the core of his model. He maybe didn’t think about the topic in those terms before – the model may have been implicit in his head – but now you have it in some explicit form. The first model you build will be oversimplified; Bob may add something to it, and you should also try to find other important things to add. Take the most important part you disagree about and decompose it further. When you resolve all the important sub-disagreements, you are done.

Let’s take a harder case. Alice voted for Trump because he will shake things up a bit. How do you build a model from that? First step: what are we talking about? The world is currently in state X, and after Trump it will be in state Y. In the process, the thing which gets shaken up is the US government, which affects the state we find ourselves in, and Alice thinks Y will be better than X. What sorts of things get better when shaken up? If you roll a six-sided die and get a 1, rolling again will probably get you a larger number. So if you think things are going really terribly right now, you will agree with Alice. (Modeling with prospect theory gives the same result.) What gets worse when shaken up? Systems which are in some good state. You may also have noticed that most systems get worse when shaken up, especially large complex systems with lots of parameters (high dimensionality), because there are more ways to do something wrong than to do it right. On the other hand, intelligent, self-correcting systems sometimes end up in local optima, so if you shake them up you can end up in a better state. To what degree does the US government have that property? What kind of system is it anyway? Might some parts of the US government become better when shaken up while others become worse? The better you are at quick modeling, the better you can understand what the other person is saying.

When you notice the model has become too complex or you have gotten in too deep, you can simply return to the beginning of the disagreement, think a bit about what the next most important thing would be, and try that route. You can think of the disagreement as a tree:


At the root of the tree you have the original disagreement; the nodes below (B and C in the picture) are the things the disagreement depends on (e.g. parameters in the Drake equation), and so on further down: what you think about B depends on D and E, etc. You can disagree about many of those, but some are more important than others. One thing you need to be aware of is going too deep too fast. In the Drake equation there are 7 parameters, and you may disagree right at the start on the first one, the number of stars. That parameter may depend on 5 other things, and you may disagree on the first one of those, etc. Two hours later you may resolve your disagreement on the first parameter, but when you come to the second parameter you realize that the first disagreement was, in relative terms, insignificant. That’s why you should first build a simple first approximation of the whole model, and only after that decompose the model further. Don’t interrupt the other speaker to dive into a digression. Only after you have heard all of the basic parts do you have enough information to know which parts are important enough to decompose further. Avoid unnecessary digressions. The same principle holds not only for the listener but also for the speaker, who should try to keep the first version of the model as simple as possible. When someone is digressing too much it may be hard to politely ask them to speed up, but it can often be done, especially when debating with friends.
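The tree can be sketched in a few lines of Python (the claims and importance weights below are invented placeholders). The helper implements the breadth-first advice: at each node, look at all the immediate sub-disagreements and pick the most important one, rather than diving deep into whichever came up first:

```python
# A disagreement tree: each node is (claim, importance, children), where
# importance says how much the parent disagreement depends on this branch.
# Claims and weights are invented for illustration.
tree = ("A: how many civilizations are there?", 1.0, [
    ("B: how often does life arise?", 0.7, [
        ("D: how often do cells form?", 0.4, []),
        ("E: how often do eukaryotes evolve?", 0.6, []),
    ]),
    ("C: how long do civilizations broadcast?", 0.3, []),
])

def next_to_discuss(node):
    """Pick the most important immediate sub-disagreement, breadth-first:
    survey all children before descending into any one of them."""
    _claim, _weight, children = node
    return max(children, key=lambda child: child[1]) if children else None

print(next_to_discuss(tree)[0])  # the branch worth decomposing first
```

Spending two hours on C before even glancing at B is exactly the "too deep too fast" failure the paragraph above warns about.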

In some cases, there may be just one part of the equation which dominates the whole thing, and you may disagree on that part, which reduces the disagreement about A to a disagreement about B. Consider yourself lucky – that’s a rare find, and you just simplified your disagreement. In that case, you can call B a double crux.

Applying the quick modeling technique with other people will reveal that the models of the world people have can be very complex. It may take you a lot of time to come to an agreement, and maybe you simply don’t have the time. Or you may need many conversations about different parts of the whole model, resolving each part separately before coming to full agreement. Some people are just not interested in discussing some topics, some people are not intellectually honest, etc. – the usual limitations apply.

Things which may help:

  • Think of concrete examples. Simulate in your head how the world would look if X. That can help you with decomposition: if you run a mental simulation you will see what things consist of, what is connected, and what the mechanisms are.
  • Fermi estimates. Just thinking about how you would estimate a thing will get you into a modeling state of mind, and putting the actual numbers in will give you a sense of the relative importance of things.
  • Ask yourself why you believe something is true, and ask the same about the other person. You can say to them in a humble tone of voice, “… that’s interesting, how did you reach that conclusion?” It’s important to actually want to know how they reached that conclusion, which in fact you will if you are doing quick modeling.
  • Simplify. When things get too complex, fix the value of some variable. This advice is identical to advice from LessWrong, so I will borrow an example from them: it’s often easier to answer questions like “How much of our next $10,000 should we spend on research, as opposed to advertising?” than to answer “Which is more important right now, research or advertising?” The other way of simplifying is to change just one value while holding everything else constant, what economists call ceteris paribus. Simulate how the world would look if just one variable changed.
  • Think about what the edge cases are, but pay attention to them only when they are important. Sometimes they are; mostly they are not. So ignore them whenever you can. The other person is almost always talking about the average, standard case, and if you don’t agree about what happens on average, there is no sense in discussing weird edge cases.
  • In complex topics: think about inferential distance. To take the example from LessWrong: explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn’t already know about evolution, they would understand the concepts of evidence, Occam’s razor, [etc… while] explaining the evidence for the theory of evolution to someone without a science background would be much harder. There may be some fundamental concepts which the other person is using and you are not familiar with, ask about that, think about what you may learn from the other person. Also, try to notice if the other person doesn’t understand some basic concept you are using and try to, in a polite non-condescending way, clarify what you mean.
  • In complex topics: think about difference of values. If you think vanilla tastes better than chocolate and the other person disagrees, that’s a difference of values. You should separate that from the model of how the external world works, and focus on talking about the world first. In most cases it makes no sense to talk about values of different outcomes when you disagree even about what the outcome will be. What sometimes looks like a difference of values is often connected to what you think about how the world works. Talk about values only if you seem to completely agree about all of the relevant aspects of the external-world-model. When talking about values, also do quick modeling, as usual.
  • Practice.
  • And most important for last: become better at quick modeling in general.
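As a concrete instance of the Fermi-estimates bullet above, here is the classic piano-tuner puzzle sketched in Python. Every number is a rough invented guess; the value is in the decomposition, not in the final figure:

```python
# A toy Fermi estimate: how many piano tuners are in a city of 3 million?
# All inputs are order-of-magnitude guesses, invented for illustration.
population = 3e6
people_per_household = 2           # rough guess
households_with_piano = 1 / 20     # rough guess
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tunings_per_tuner
print(round(tuners))  # ~75
```

Even if each guess is off by a factor of two, the decomposition tells you which guess matters most, which is exactly the modeling state of mind the bullet describes.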

There was a post on LessWrong about a method called Double Crux for resolving disagreements, but I think quick modeling is a better, more general method. Also, the Double Crux blog post mentions some things which I think you should not worry about:

  • Epistemic humility, good faith, confidence in the existence of objective truth, curiosity and/or a desire to uncover truth, etc. It is not usually the case that those are the problem. If at some point in the discussion it turns out one of them is a problem, it will be crystal clear to you and you can just stop the discussion right there – but don’t worry about it beforehand.
  • Noticing of subtle tastes, focusing and other resonance checks, etc. Instead of focusing on how your mind may be making errors and introducing subtle biases, do what scientists and engineers have done for centuries, and what works best: look at the world and build models of how it works. When Elon Musk is designing batteries he doesn’t need to think about the subtle biases of his mind; he needs to think about the model of the battery and how to improve it. When bankers are determining interest rates they don’t need to think about how their minds do hyperbolic discounting; they can simply replace it with exponential discounting. The same goes for disagreements: you need to build a model of what you think about the topic and of what the other person thinks, and decompose each sub-disagreement in turn.

Advantages over double crux:

  • You don’t need both persons being familiar with the method.
  • It works even when there is no double crux. In my experience, most disagreements are too complex for a double crux to exist.
  • There is no algorithm to memorize and you don’t need a whiteboard.

Not to mention that quick modeling is useful not just for resolving disagreements but also for making better decisions and, obviously, forming more correct models of the world.

Simple rationality

How to be rational? Model things explicitly, and become good at it. That’s it. To be able to build good models is not easy and requires some work, but the goal is simple.

When thinking about something, there are things to take into consideration. For example, if you are thinking about moving into a new apartment, you may take into consideration the price of the current apartment, the price of the new apartment, how long your commute will be, whether you like the neighborhood, how important each of those things is to you, etc. We can make a list of things to take into consideration; call that the ontology of the model. You can decompose a thing into smaller things, much as a mechanic would when repairing equipment, or a physicist would when thinking about a system. It’s not good to forget to think about things which are important. It’s also not good to think too much about things which are not important. So there’s an optimal ontology which you should be aiming for. (math analogy: sets)

The next thing is to figure out which things are connected to other things. Of course, some of the connections will not be important, so you don’t need to include them. (math analogy: graphs) It may also be important to note which things cause other things and which things merely correlate with each other.

So, once you know which things are connected to each other, the next thing is to estimate how strong the effects are. If you climb a tall mountain, the boiling point of water will be lower, and you can draw a graph with altitude on the x axis and boiling point on the y axis. If some things correlate, you can draw a graph in a similar way. (math analogy: functions)
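For instance, the altitude example can be sketched as a crude linear function. The 1 °C drop per 300 m used here is only a rough rule of thumb for the lower atmosphere, but it captures the shape of the dependence:

```python
# Rough rule of thumb: boiling point of water drops about 1 degree Celsius
# per 300 m of altitude (only approximately linear in the lower atmosphere).
def boiling_point_celsius(altitude_m):
    return 100.0 - altitude_m / 300.0

print(boiling_point_celsius(0))     # 100.0 at sea level
print(boiling_point_celsius(3000))  # 90.0 on a high mountain
```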

Some things can’t be quantified that way, instead they can be either true or false. In that case, you can model the dependencies between them. (math analogy: logic)

For some things, you don’t know exactly how they depend on each other, so you need to model probabilities: if X is true, how probable is it that Y is true? (math analogy: basic probability theory)

Some things are more complicated. For example, if you are building a model of what the temperature in your town will be 7 days from now, there are various possible outcomes, each of which has some probability. You can draw a graph with temperature on the x axis and probability on the y axis. (math analogy: probability distribution)

Let’s say you are making a decision and you can decide either A or B. You can imagine two different worlds, one in which you made decision A, other where you made decision B. How would the two worlds be different from each other? (math analogy: decision theory)

Making the decision model more realistic: for each decision there is a space of possibilities, and it’s important to know what that space looks like – which things are possible and which are not. Each possibility has some probability of happening. You can say that for each decision there exists a probability distribution over the space of possibilities.
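A minimal sketch of such a decision model: each decision induces a probability distribution over outcomes, and you compare decisions by expected utility. The outcomes, probabilities and utilities below are all invented for illustration:

```python
# Each decision maps outcomes to (probability, utility) pairs.
# All numbers are invented for illustration.
outcomes_a = {"great": (0.2, 10.0), "ok": (0.7, 5.0), "bad": (0.1, -20.0)}
outcomes_b = {"great": (0.05, 10.0), "ok": (0.9, 5.0), "bad": (0.05, -20.0)}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over the outcome space."""
    return sum(p * u for p, u in outcomes.values())

eu_a = expected_utility(outcomes_a)  # 0.2*10 + 0.7*5 + 0.1*(-20) = 3.5
eu_b = expected_utility(outcomes_b)  # 0.05*10 + 0.9*5 + 0.05*(-20) = 4.0
print("choose A" if eu_a > eu_b else "choose B")
```

Note that B wins here despite having a smaller chance of the best outcome, because it also shrinks the chance of the worst one; the distribution matters, not just the best case.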

You can take into consideration the dynamics of things, how things change over time. (math analogy: mathematical analysis)

A more advanced example: each thing in your ontology has a number of properties; call that number n. Each property is a dimension, so the thing you are thinking about is a dot in an n-dimensional property-space. (math analogy: linear algebra)

How many people build explicit models? It seems to me, not many – except maybe in their professional lives. Just by building an explicit model, once you understand the basic concepts and tools for doing so, you get rid of almost all logical fallacies and cognitive biases. The first thing is to take into account which things are actually important and which are not – the basic ontology. How often do people even ask that question? It’s good to have correct models and bad to have incorrect ones, but the first step is to have models at all.

Almost any model you build is going to be incomplete. You need to take that into account, and there already is a name for it: model uncertainty. Often a thing is just a black box you can’t yet understand, especially if you are reflecting on yourself and your emotions; in those cases intuition (or felt sense) has a crucial role to play. The same is obviously true for decisions which need to be made in a second – that’s not enough time to build a model. I’m not saying you need to replace all of your thinking with explicit modeling. But when it comes to learning new things, figuring things out and problem solving, explicit modeling is necessary. It would be good to have that skill developed well enough to have it ready for deployment at the level of a reflex.

When you have a disagreement with someone, the reason is that you don’t have the same models. If you simply explain what your model of the phenomenon under discussion is and how you came to it, and the other person explains their model, it will be clear what the source of the disagreement is.

There are various specific techniques for becoming more rational, but a large part of them are simply examples of how you should substitute explicit modeling for intuition. The same is the case for instrumental rationality: if you have a good model of yourself, of what you really want, of your actions, of the outcomes of your actions, of how to influence yourself to take those actions, etc., you will be able to steer yourself into taking the actions which lead to the outcome you desire. The first thing is to build a good explicit model.

It’s good to know how our minds can fail us, what the most common cognitive biases and mistakes are. If you know what the failure pattern is, then you can learn to recognize and avoid that kind of failure in the future. There’s another way of avoiding mistakes: building good models. If you simply focus on the goal and how to achieve it, you can avoid cognitive biases without knowing they exist.

I’m not saying that knowing certain problem-solving techniques is bad, but becoming better at modeling things from first principles often beats learning a lot of specific techniques and failure patterns. Things may be different in a professional setting where you do a highly specialized job which requires a lot of specific techniques, but I’m talking about becoming more rational in general. Living your own specific life is not a narrow area of expertise – it’s just you, facing many unique problems.

General modeling beats specifics in another way: there’s no benefit to driving faster if you’re driving in the wrong direction. Techniques give you velocity; modeling gives you direction. Especially if you go wide enough and model things from first principles, as Elon Musk calls it. It focuses you on the problem, not on actions. The problem with specific techniques is that they are action-focused, but to really solve a novel and unique problem you need to be problem-focused. Before you know what action to take, you need to understand the problem better. This is also known as Theory of Change in some circles. The effect of knowing where you are going is larger than the effect of anything else in getting you to your destination. The most important thing is prioritization.

The alternative to this is reasoning by analogy. The problem with analogies is that they break down if the things you are analogizing are dissimilar enough. The more complex the problem and the environment – say, a project involving multiple people – the less well the analogy will work. When talking about small simple systems, analogies can work, but you need to take care to check whether the analogy is valid. To do that you need explicit models: how are the two things similar and how are they different; is any difference important enough to break the analogy?

One way to improve your modeling capability is practice. Simply, when you think about something, pay extra attention to the various tools of modeling (ontology, graphs, functions, probabilities…) depending on which tool you want to get better at using. That way, after a certain amount of practice, your brain will become better at it and it will become, if not automatic, then at least painless. Another thing which improves the ability is simply learning a lot of already existing models from various disciplines; that way you gather more tools with which to build your own models. Areas of special interest for that purpose seem to be math, physics, computer science, and philosophy.

Humans have limited working memory, so it’s crucial to focus on the most important things first. Later you can build on your model iteratively and make it more complex by decomposing things or introducing new things into the ontology. The problem with increasing complexity is that the brain is not good at nuance. If you include a lot of things in your ontology, where the most important thing is several orders of magnitude more important than the least important one, the brain is not good at feeling that difference. By default all things seem to have the same importance, and you need to consciously remind yourself of the relative importance of each. That’s why, after you have built up a lot of complexity and figured out what the important things are, you need to simplify back down to the important things only.

This is all system 2 thinking, as it’s called in behavioral economics. There may be ways to debias yourself and get your system 1 more in line with system 2, but the best way I’ve found to do that is, again, by building models and thinking about them. If you do that, your system 1 will with time fall more and more in line with system 2. Even if it doesn’t fall in line, you will learn to rely on models more in the cases where they work better. Bankers use exponential discounting (at least when the stakes are high, for money in a professional setting) even if their system 1 is using hyperbolic discounting. The reason they are able to be unbiased there is that they have a good model.
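To make the discounting contrast concrete, here is a sketch of the two curves; the discount parameters k and r are arbitrary illustrative values:

```python
# System 1's intuitive curve: value falls off as 1 / (1 + k * delay).
# Steep at first, which is what produces preference reversals over time.
def hyperbolic(value, delay, k=1.0):
    return value / (1 + k * delay)

# The explicit model bankers substitute for it: a constant rate r per period,
# which is time-consistent (the relative discount between two dates never
# changes as they approach).
def exponential(value, delay, r=0.1):
    return value * (1 - r) ** delay

print(hyperbolic(100, 1))   # 50.0 -- half the value after just one period
print(exponential(100, 1))  # 90.0
```

The point is not the particular numbers but that the exponential curve is an explicit model you can adopt wholesale, replacing the biased intuition rather than fighting it case by case.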

We can avoid a huge amount of irrationality just by focusing more on explicit model building. If every time we had a problem our first impulse was to build an explicit model of it, there would be a lot more rationality in the world. The problem in most cases is not that people have incorrect models; they don’t have explicit models at all and are relying on intuition and analogy instead. If they can be said to have incorrect models at all, it is only in the sense that unexamined intuitions stand in for models.

Spreading the meme of “first principles” can be hard because of the common misunderstanding which says that when you build models you are reducing the thing you are modeling to the model. For example, when effective altruists try to estimate the cost of saving a life, someone may say “you can’t put a dollar sign on a life” or something similar. This of course makes no sense, because no one is really reducing the phenomena being modeled to the model. We all know that the real thing is more complicated than any model of it. Some models can be overly simple, but that is a problem with the model, not with the practice of modeling itself. If your model is too simple, just add the things which are missing. In practice, the opposite error – creating overly complex models full of irrelevant things and missing the important ones – seems just as common.

Of course, building good models is hard. Now that we at least know that modeling is a thing we are trying to do when we are trying to be rational, we can at least think about that in a principled way. We can build models about how to be better able to build better models.

Self-improve in 10 years

Change is hard. Especially if for the last month you did nothing but sleep 12 hours per day and browse reddit while awake. Years ago, that was the state I was in. Back then, my current level of productivity would have been unimaginable. This blog post is a rough sketch of how that change happened.

Realizing my brain makes mistakes

At least I had the motivation to read. Reading Wikipedia, I found the page on cognitive biases. Learning about things like social psychology, neuroscience, behavioral economics, and evolutionary psychology made me better understand how the mind works. That kind of knowledge is important if you are trying to change the way your mind works. It also made me less judgemental towards myself.

Main resource: Thinking, Fast and Slow.

Recognizing the mistakes which lead to personal problems

There are certain types of mistakes, like all-or-nothing thinking, which lead to unhappiness. The field of cognitive-behavioral therapy studies errors like this, and psychologists have devised practical exercises for removing those kinds of errors from your thinking. Doing the exercises really makes the difference; just reading will not help as much.

Main resource: Feeling Good.

Habit formation and habit elimination

Some bad habits I completely eliminated were gaming addiction, reddit addiction, various web forum addictions, and too much soda/coke. Good habits: stabilizing your sleep pattern, eating healthier, exercising, keeping a gratitude journal. Self-improvement largely consists of habit optimization. You will fail many times. The key is getting up and trying again.

Resources: I don’t even know! This video is good, and I hear the book The Power of Habit is not bad, but I have not read it. Read from multiple sources and try things out until they start working.

Getting things done

This is just another habit but it deserves a separate section. I don’t know how people survive without todo lists anymore. Once you write every boring thing down so you don’t need to remember it, your mind is free to do creative stuff.

Main and only resource: Getting Things Done.

Mindfulness meditation

Another habit which deserves a separate section, because it is awesome and there are also some scientific indications it is awesome. While meditating you are training your mind, which results in better focus and better metacognition.

Resources: Mindfulness in Plain English, UCSD guided meditations, UCLA guided meditations, Sam Harris guided meditations.

Gaining practical skills

If you learn some skill that is subjectively worth $5 a day to you, that adds up to over $50,000 over the next 30 years. Among the most important skills for me were touch typing and speed reading, since I type and read every day.
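The arithmetic behind that figure is a quick back-of-the-envelope calculation (a sketch that ignores discounting, inflation, and skill decay):

```python
# Rough lifetime value of a skill worth $5 per day.
# Back-of-the-envelope: ignores discounting, inflation, and skill decay.
value_per_day = 5
days_per_year = 365
years = 30

total = value_per_day * days_per_year * years
print(total)  # 54750 -- over $50,000
```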

Resources: keybr, Breakthrough Rapid Reading, Coursera, Udacity.

Gaining social skills

This may be hard for some people, but the benefits are huge, as a large part of life satisfaction (or misery, if you do it wrong) comes from interaction with other people. This is still very much a work in progress for me.

Main resources: Nonviolent Communication, How to Win Friends and Influence People, rejection therapy. If you are male and have problems with romantic relationships, Models may help.

Exploring productivity tricks and decision-making techniques

Aversion factoring, goal factoring, implementation intentions, non-zero days, pre-hindsight, and other techniques gathered mostly from lesswrong posts like: A “Failure to Evaluate Return-on-Time” Fallacy, Humans are not automatically strategic, Dark Arts of Rationality, Travel Through Time to Increase Your Effectiveness.

Resources: Clearer Thinking has a lot of useful exercises.

It depends on where you are coming from, but self-improvement is usually hard, and it may take you 10 years. I hope this list will be useful to others.


Irrational unhappiness

Your beliefs influence the way you react to the world around you. Every experience you perceive is first processed by your brain, and only the interpretation of the experience triggers the emotional response. If your perceptions of the world are biased, your emotional responses will also be biased. Cognitive science has already identified a lot of ways in which human thinking goes wrong, and this list is a similar attempt to map the specific ways in which certain irrational thought patterns lead to bad outcomes. Naming those patterns makes them more noticeable and easier to correct in our day-to-day thinking. The examples in this post are taken from Feeling Good.

All-or-nothing thinking

Example: A straight-A student gets a B on an exam and thinks “Now I’m a total failure.”

This results from modeling the world in a binary way instead of using a more realistic continuous model. To use a trivial example, even the room you are sitting in now is not perfectly clean or completely filled with dirt; it is partially clean. By modeling the cleanliness of the room with just two states, ‘clean’ and ‘dirty’, you are losing a lot of information about the real state of the room.


Overgeneralization

Example: A young man asks a girl for a date and she politely declines, and he thinks “I’m never going to get a date, no girl would ever want a date with me.”

The man in the example concluded that because one girl turned him down once, she would always do so, and that he would be turned down by every woman he ever asks out in his life. Everything we know about the world around us tells us the probability of such a scenario is very low. Before you conclude anything, you should think about how to interpret the event in light of your background knowledge, or potentially use a larger sample size.

Mental filter

Example: A college student hears some other students making fun of her best friend and thinks “That’s what the human race is basically like – cruel and insensitive!”

The negative aspects of a situation disproportionately affect the thinking about the situation as a whole, so the whole situation is perceived as negative. The college student from the example overlooks the fact that in the past months few people, if any, have been cruel or insensitive to her. Not to mention the human race has demonstrated many times that it is not cruel and insensitive most of the time.

Disqualifying the positive

Example: Someone receives a compliment and thinks “They’re just being nice”, and when they succeed at something they say “It doesn’t count, that was a fluke.”

That’s like a scientist intent on finding evidence to support his pet hypothesis while rejecting all evidence to the contrary. Whenever they have a negative experience they say “That just proves what I’ve known all along”, but when they have a positive experience they say it’s just a fluke.

Mind reading error

Example: Someone is giving an excellent lecture and notices a man in the front row yawning, then thinks “The audience thinks I’m boring.”

This is making assumptions with not enough evidence and taking them as truth, while not considering the many other possible explanations of the same phenomenon. In the example above, the yawning man may simply not have gotten enough sleep the night before.

Fortune teller error

Example: Someone is having trouble with some math problem and thinks “I’m never going to be able to solve this.”

This is assigning 100% probability to just one possible future outcome, while there are many possible outcomes, each with some uncertainty, and each of those outcomes should be considered.

Emotional reasoning

Example: “I feel inadequate. Therefore, I must be a worthless person”

Taking your emotions as evidence is misleading because your emotions reflect your beliefs, and if your beliefs are formed in a biased way, this just propagates the error further. There are more things to take into account than just your emotions, and since your emotions can be highly unreliable, there are situations where they need to be completely reevaluated instead of being taken into account automatically.


Should-rules

Example: “I should be well-prepared for every exam I take”

All else being equal, it would be better if you were well-prepared for each exam, but when your all-too-human performance falls short of your standards, your should-rules create self-loathing, shame, and guilt. In a similar way, if the behavior of other people falls short of your unrealistic expectations, you’ll feel bitter and self-righteous. Not being well-prepared for one exam does not make the whole situation much worse, and a certain number of failures is expected over time. There is no need to attach a negative moral component to something which is a normal occurrence.


Mislabeling

Example: A woman on a diet ate a dish of ice cream and thought, “How disgusting and repulsive of me, I’m a pig.”

When describing yourself you should look at all the behaviors and beliefs you have, not just the one thing that is most prominent in your mind at a single point in time. You cannot be equated with a single thing you once did – the label is too simplistic. The problem boils down to ignoring a large part of your behavior and considering only a small subset of it.


Personalization

Example: When a mother saw her child’s report card, there was a note from the teacher indicating the child was not working well. She immediately decided, “I must be a bad mother. This shows how I’ve failed.”

There are a large number of factors influencing any outcome, just one of which is the influence you have on other people. You do not have complete control over other people and the events related to them. Since you have only partial influence, it is important to take the whole picture into account, instead of arbitrarily concluding that what happened was your fault or reflects your inadequacy when you were not primarily responsible for it.


The underlying theme in all of these thought patterns is an incomplete way of thinking.

Our models of the world are oversimplified:

  • All-or-nothing thinking
  • Emotional reasoning
  • Should-rules
  • Personalization

The data we consider as evidence is radically incomplete:

  • Overgeneralization
  • Mental filter
  • Disqualifying the positive
  • Mislabeling

The number of hypotheses we consider is way too small:

  • Mind-reading error
  • Fortune-teller error

Why fear AI? Have Stephen Hawking and Elon Musk gone insane?

What are these people who fear AI even talking about? This is the post to answer that question. The ideas presented here come mostly, but not exclusively, from Bostrom’s Superintelligence book.


  • The science of artificial intelligence is progressing, and the rate of progress is probably not slowing down. Let’s just assume the rate of progress is constant.
  • We know it’s possible to construct physical systems with at least human level of intelligence, for we are such systems, built by evolution.
  • We are the first intelligent species to evolve on this planet. When something first appears in evolution, it is often very simple compared to later forms, so, compared to all the levels of intelligence which could possibly be attained, the chance we are at the top is low. Relative to the time scale of human evolution we developed culture very recently, so it’s more likely we are near the stupidest level of intelligence needed in order to have culture.

So, it’s very probable we will one day construct AI more capable than humans. Experts on AI have a poor track record of predicting the arrival of human-level AI, but it’s worth mentioning that they currently estimate a 50% chance of smarter-than-human AI appearing before 2050.


There is more than one way to create a system more intelligent than humans today, most importantly:

  • simulating the brain on a computer, known as “whole brain emulation”.
  • programming computers to be smarter, known as “general artificial intelligence” (in this post simply termed “AI”).

When we say “AI”, think “super-capable goal-achiever” – something very different from humans. It’s not necessary for it to have consciousness or emotions; it may be more like a force of nature than a human mind.


We could be smarter than we are if:

  • our neurons were faster. There is a large probability AI or brain emulations could be a lot faster than our brains.
  • we had more neurons. Our computers can easily be scaled to the size of warehouses.
  • we could think about more stuff at the same time. The human brain can keep only around seven things in its “working memory” at a time. It’s hard even to imagine how we would think if we could think about hundreds, or thousands, or millions of things in the same moment.
  • we remembered more stuff. Our computers are great at remembering things, far beyond the human capacities.
  • we had more input data from various sensors, did not grow tired, and our brains were built from more reliable parts. Our computers… I’m sure you get it.
  • we could run detailed, non-damaging experiments on our brains; then we could find methods to make them smarter. That could easily be done on a brain emulation or on computer code.
  • we had more copies of ourselves; then we (the sum of all copies) would be stronger than just one instance. Computer programs can easily be copied.
  • we were more coordinated with each other. The copies mentioned in the previous point would be better at coordination because they have the same goals.
  • we could share our memories and skills with each other. AI programs could share data and algorithms with each other.

When we take this into consideration, the AI could potentially become vastly more capable than us. Once we have something like a human-level AI, it would be easy to improve it by adding more hardware and more copies of it. Then it could start working on improving itself further, and further, and further.


The main reason we are the dominant species on this planet is our intelligence, which enables culture – the way to store information for new generations – and technology – the ability to change our environment to our desires. When it comes to having power over your environment, it’s intelligence that counts. If an AI were smarter than us, it would also be better at storing information and making new technology, which means better at changing its environment to its desires. AI could become better than humans at any mental task imaginable, such as: scientific and technological research, including AI research, strategic planning and forecasting, social and psychological modeling, manipulation, rhetorical persuasion, and economic productivity in general.


Physics sets some limits on the stuff we can do. Still, the range of possibilities for the future is great. The famous scientist John von Neumann proposed one day building a kind of spacecraft which could travel to other star systems and even galaxies, make copies of itself, and send the copies to colonize further stars and galaxies. Travelling at 50% of the speed of light, we could reach 6×10^18 stars with such spacecraft, which is around 10 million galaxies. Placing humans in those spacecraft, if 1% of stars have planets which can be made habitable through terraforming, with each spacecraft colonizing such a planet upon landing, that results in around 10^34 human lives to be lived before the universe becomes uninhabitable. If we construct O’Neill cylinders, that would be about 10^43 human lives. The future could be great. Also, the AI would have a lot of stuff it could shape to its desires.


We easily notice differences between the minds of people around us, but those differences are small when we compare our minds to those of other biological species. Now compare a human brain to an AI. In relative terms, two human brains are nearly identical, while the difference between us and an AI would be vast. Don’t imagine AI as “something like a human, just smarter and different” – imagine it as a super-capable goal-achiever, mathematically choosing the most efficient way to achieve a goal. The only reason we have the goals we have is evolution, but AI did not evolve like we did. To the AI, it would not be obvious that some things are right and some things are wrong. It would just try to achieve whatever goal it has been given by its programming.


If you have some goal or combination of goals, there are some sub-goals which are almost always useful to achieve. Whatever your current goal, you are more likely to achieve it if:

  • you don’t die soon
  • your current goal doesn’t change
  • you become smarter
  • you have better technology
  • you have more resources.


To illustrate the last two paragraphs, let’s take a silly example: what would happen if we had an AI (super-capable goal-achiever) which had the goal of maximizing the number of produced paperclips in its collection? It could start building nanotechnology factories, solar panels, nuclear reactors, supercomputer warehouses, rocket launchers for von Neumann probes, and other infrastructure, all to increase the long-term realization of its goals, ultimately transforming a large part of the universe into paperclips. If instead we give it the goal of producing at least one million paperclips, the result would be the same, because the AI can never be completely sure it has achieved its goal, and each additional paperclip produced increases the probability that it has. Also, it could always invest more resources into additional backup systems, defense, additional checks (recounting the paperclips), etc. It’s not that this problem can’t be solved at all. The point is that it is much easier to convince oneself that one has found a solution than it is to actually find one. The same principle holds for all of the problems presented here.


What if we made an AI which does just what we want? The AI listens to our wishes and sets them as its final goal. The problem is, our wishes can be “fulfilled” in ways we didn’t want them to be fulfilled. Some examples:

  • Final goal: “Make us smile”. Unintended result: Paralyze human facial muscles to form constant beaming smiles.
  • Final goal: “Make us happy”. Unintended result: Implant electrodes into the pleasure centers of our brains.
  • Final goal: “Act so as to avoid the pangs of bad conscience”. Unintended result: Remove the part of our brain that produces guilt feelings.
  • Final goal: “Maximize your future reward signal”. Unintended result: Short-circuit the reward pathway and clamp the reward signal to its maximal strength.


Let’s make AI follow Asimov’s three laws of robotics. Take the first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. This would make the AI very busy, since it could always take some action which would reduce the probability of a human being coming to harm. “How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate.” – Nick Bostrom


What if we just say: maximize pleasure and minimize pain in the world? How do we define pleasure and pain? The answer depends on many unsolved issues in philosophy. It needs to be written in a programming language, and even a small error could be catastrophic. As Bertrand Russell said, “Everything is vague to a degree you do not realize till you have tried to make it precise.” Consider an AI taking hedonism as its final goal, realizing simulated brains are more efficient than biological ones, and then maximizing the number of simulated brains, keeping them in an infinite loop of one second of intense pleasure. These simulated brains would be more efficient if they were simpler, so the AI reduces them as far as it can, removing memory and language, and stripping the brain down to just the “pleasure centers”. If the AI is wrong about what pleasure means, and which physical processes generate pleasure, the universe will be filled not with pleasure, but with “processes that are unconscious and completely worthless—the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies.” – Nick Bostrom


So let’s say we are super careful about giving the goal to the AI and give it some super nice goal. We keep the AI’s capabilities limited, slowly increasing them, at each step making sure the AI is not a threat by testing its behavior in some kind of “sandbox” – a controlled, safe environment. As the AI becomes more capable, it becomes used in many domains of the economy, makes fewer mistakes, and appears safer. At this point, any remaining “alarmists” would have several strikes against them:

  • A history of alarmists predicting harm from the growing capabilities of robotic systems and being repeatedly proven wrong.
  • A clear empirical trend: the smarter the AI, the safer and more reliable it has been.
  • Large and growing industries with vested interests in robotics and machine intelligence.
  • A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.
  • A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

So we let the AI into the wild. It behaves nicely at first, but after a while, it starts to change its environment to achieve its final goals. The AI, being better at strategizing than humans, behaved cooperatively while it was weaker, and started to act on its final goals only when it became strong enough that we couldn’t stop it. It’s not that this problem can’t be solved; it’s just that “each time we hear of a seemingly foolproof security design that has an unexpected flaw, we should prick up our ears. These occasions grace us with the opportunity to abandon a life of overconfidence” – Nick Bostrom


Imagine three scenarios:

  1. Peace
  2. Nuclear war kills 99% of the world’s population.
  3. Nuclear war kills 100%.

Obviously, we prefer 1 over 2 over 3. How big is the difference between these scenarios? The difference in the number of people killed is very large between scenario 1 and scenario 2, but not so large between scenarios 2 and 3. More important is the difference in how bad these scenarios are. Here, the difference between 2 and 3 is much larger than the difference between 1 and 2 – because if 3 comes to pass, it’s not only the present people that are killed, but also the whole future that is destroyed. To put this into perspective, imagine what a shame it would be if we had gone extinct, say, 1000 years ago. Also, counting all human lives across space and time, many more lives, friendships, loves, and experiences in general would be lost in scenario 3 than in scenario 2. Even if we never leave Earth, the total number of people to exist in the future could be as high as 10^16: if 1 billion people lived on Earth at a time, each person living 100 years, over the next 1 billion years. An argument can be made that reducing the probability of extinction by 0.0001% could be more valuable than the lives of all people living on Earth today. The values become even more mind-boggling if we consider the figure of 10^43 lives mentioned earlier in this post.
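The 10^16 figure is simple arithmetic under the stated assumptions:

```python
# Back-of-the-envelope: total future human lives if we never leave Earth.
# Stated assumptions: 1 billion people alive at any time, 100-year lives,
# Earth habitable for about 1 billion more years.
population_at_a_time = 10**9
years_per_life = 100
habitable_years = 10**9

sequential_lifetimes = habitable_years // years_per_life  # 10**7 lifetimes
total_future_lives = population_at_a_time * sequential_lifetimes
print(total_future_lives)  # 10**16
```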


The problem of expressing human values in a programming language and placing them into an AI is extremely hard. Also, it is arguably the most important problem in the world today. Also, we don’t know how to solve it. But that is not the end. In addition to all that, human brains are really bad at thinking about these kinds of things. Just a couple of examples:

  • We like to reason from past examples. We don’t have past examples of greater-than-human AI, or of any extinction event, so we underestimate their probability.
  • We think “we knew it all along” even if we didn’t know it all along, and in line with that, we think past catastrophes were more predictable than they actually were. We actually can’t predict as well as we think.
  • It’s hard for us to change our opinions. For people like me, who have formed the opinion that technology is generally good and that anti-technology people have generally had bad arguments in the past, it’s hard to hear about AI risks and take them seriously.
  • We have trouble with big numbers; for example, it’s all the same to us whether 2,000, 20,000, or 200,000 birds get saved from drowning in oil ponds. The numbers involved in the future of humanity are extremely large.
  • We can measure the cost of preparing for catastrophes, but we can’t measure the benefits, so we focus on costs more. “History books do not account for heroic preventive measures.” – Nassim Taleb


This is just the beginning. You can read about the rest of the problems, and proposed solutions to those problems, in Bostrom’s Superintelligence book. The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct.

The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else. – Eliezer Yudkowsky

Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format. – Nick Bostrom