(Numbers in this post are made up to illustrate a point. Likewise the plot device need not be cryptocurrencies, that’s just flavor text.)
You’re walking on the street, minding your business. Suddenly, a stranger (S) comes up and starts talking to you:
S: Have you heard? There’s a one-in-a-million chance you could save a billion lives! All you have to do is found a cryptocurrency, become extremely rich, and use that money to fight malaria!
You think this person has lost it, but you feel awkward just walking past them, so you (Y) try to engage kindly:
Y: I haven’t checked your math, but I don’t want to found a cryptocurrency. I’m not really sure what I want to do yet, actually.
S: I promise my math is right. You can check it later with this list of how many times people have tried and failed, or succeeded, at becoming rich with cryptocurrencies, along with GiveWell’s estimates of how many lives you can save for a hundred billion dollars.
Y: Even assuming your math is right, it’s not obvious I should drop everything and do this.
S: But think of the impact! The odds are low, sure, but the impact if it works is enormous! You can’t just say no to that.
This is a classic case of Pascal’s Mugging. You’re presented with an opportunity that has very low odds of success, but such high value that the expected value is still better than whatever you’re currently doing.
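The stranger’s pitch rests on a one-line expected-value calculation, which is worth making explicit (the probability and payoff here are the made-up numbers from the dialogue, nothing more):

```python
# Expected value (EV) of the stranger's proposal, using the
# made-up numbers from the dialogue: a one-in-a-million shot
# at saving a billion lives.
p_success = 1e-6          # one in a million
lives_if_success = 1e9    # a billion lives

ev_lives = p_success * lives_if_success
print(ev_lives)  # ~1000 expected lives saved
```

Even at absurdly low odds, the product comes out to roughly a thousand expected lives, which is why the mugging has any pull at all.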
There are lots of debates about whether or not you should change course when presented with a case like this. That’s not the point I want to get at, so I’m going to assume that you do in fact want to maximize expected value (EV) and hence would change course if mugged in this way. Rather, I want to point out that there are many low-probability options and some of them could be even better than whatever the stranger has proposed!
There Are Many Options
When probabilities get small, the number of options could get very large.
Say your goal is to help alleviate poverty. One way you can do this is by finding the highest-paying job you can get, then donating the bulk of your income. That’s a great approach, and if you’re a professional in a developed country it has high odds (90+%) of letting you donate many thousands of dollars each year. The odds are so good mainly because it’s a simple plan.
Another way you could alleviate poverty is by convincing the US government to spend its foreign aid budget more effectively. This seems much less likely to work, but the US spends tens of billions on aid each year so the odds don’t have to be high to beat out your working-and-donating plan. Even 1/100,000 odds give a huge expected value.
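The comparison can be sketched as a back-of-the-envelope EV calculation. (The yearly donation figure and the 10% effectiveness improvement below are my own illustrative assumptions, not numbers from the text; only the 90% and 1/100,000 odds and the tens-of-billions budget echo the paragraphs above.)

```python
# Rough EV comparison of the two poverty plans. All figures are
# illustrative, in dollars of effective aid per year.

# Plan A: earn in a developed country and donate.
p_a = 0.9                  # ~90% odds the plan works
donated_per_year = 10_000  # hypothetical yearly donation
ev_a = p_a * donated_per_year

# Plan B: shift how the US spends its foreign-aid budget.
p_b = 1 / 100_000              # long-shot odds
aid_budget = 30_000_000_000    # "tens of billions" per year
improvement = 0.10             # suppose 10% of it becomes more effective
ev_b = p_b * aid_budget * improvement

print(ev_a, ev_b)  # plan B's EV exceeds plan A's despite the odds
```

Under these assumptions plan B’s expected value is several times plan A’s, even though it almost certainly fails.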
What can happen at 1/100,000 odds? You could:
- Volunteer for a Senate campaign and hope to rise far enough in the campaign to influence policy.
- Get a job at the State Department with a plan to spend just a few minutes a year with the Secretary of State to influence their aid recommendations.
- Hang out in bars in the DC area hoping to befriend Senate staffers and convince them to push for more effective aid spending.
- Start a hedge fund, get rich, and hire a lobbyist.
- Apply for a PhD program in political science, get hired by a think tank, and write white papers on why aid efficacy matters.
And so on. The point is: your plan is allowed to have more steps, assume more things go right, and generally be more complex the lower the odds you’re willing to accept.
An important consequence of this is that you can’t possibly evaluate all the options at low odds. This is one of the curses of bounded rationality. So if someone says
> I have this great plan that has a 50% chance of working, and if it works it saves a thousand lives!
your intuition is probably that this is a great idea, and that’s right! You should probably consider this very carefully because there are very few opportunities like that, and if they’re right it could be incredibly important that you do this. On the other hand, if someone says
> I have this great plan that has a one-in-a-million chance of working, and if it works it saves a billion lives!
you should try to remember that they can’t possibly have evaluated even a small fraction of the one-in-a-million schemes to save a billion lives. So they’re necessarily neglecting a lot of the space, and it’s possible there’s something way better out there that you’d block off by taking the path presented.
Relation to Cause Choice
This advice doesn’t really apply to picking which cause to work on; it’s about how you go about having an impact once that choice is made. While there are a good number of cause areas to choose from (e.g. AI safety, nuclear safety, global development, climate, and so on), that list is vastly smaller than the set of one-in-a-million paths you could take to address any given cause. It is far more feasible to systematically evaluate cause areas than to evaluate paths within a cause, so the lesson that there are many options applies much more to paths than to causes.
When considering what to do next it’s important to consider opportunity costs, which are intrinsically about the other paths that you choose not to walk. When you require high odds of success you don’t have many options, and can usually enumerate and weigh them all carefully, which builds an intuition that you should pick the best path you’re aware of.
This intuition fails when you accept low odds. There are unimaginably many one-in-a-million paths to success in almost any venture, and there’s no reason to think that the paths you’ve encountered are anywhere near the best. The more unlikely the path you’re considering, the more alternatives you want to consider, and the less you want to be swayed by what happens to get your attention.
Moreover, at low odds, no matter how much you explore you’ll probably be leaving something better on the table. That is okay! At some point you have to stop exploring and start acting, and considering every possible course isn’t a real choice you have anyway.