Do Saints Exist? Or, Cowardice with Excellent PR
The matter at hand: am I a saint, or an ape who is excellent at fooling himself? S defends the former, A the latter.
A: Why don’t you donate more to charity? I’m not even asking for all your worldly goods, just 10%, a reasonable amount, a Schelling point distinguishing between real sacrifice and mere self-satisfaction.
S: It might be better for the world if I try to maximize my own resources. There are other ways to help the world than charitable contributions, and they often involve large amounts of capital. I could use my savings to go to graduate school, and participate more directly in the Project. I could start a business, and end up with way more money to donate later. There’s also the possibility of some catastrophe where I need my capital as a fallback.
A: In what apocalypse would a pile of cash be useful?
S: There’s a small chance the United States will become a high-corruption country where cash…
A: Quite a small chance. I’m pretty sure you’re actually thinking about a personal apocalypse, like an onerously expensive medical condition or a work-prohibiting disability (which in your field would probably be a mental problem).
S: You can’t contribute much if you’re bankrupt.
A: Which is a small risk. The benefit of donating now instead of later, when you consider the declining QALY/dollar value of contributions, far outweighs it.
S: But consider Talebian iterated-game ruin risk. In any given year, a dynasty faces a very small chance of catastrophe. But repeated over generations, the dynasties that remain alive and able to contribute are the ones that best survived the inevitable negative events. Thus the dynasties that stay healthy generation upon generation are the ones who contribute little.
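(An aside: S's ruin argument can be made concrete with a toy simulation. Every number here is an illustrative assumption, not a claim about real actuarial risk: a 1%-per-year chance of catastrophe, and a cash buffer that lets a hoarding dynasty survive 90% of the catastrophes a donating dynasty would not.)

```python
import random

def simulate_dynasties(n_dynasties=10_000, years=200,
                       ruin_prob=0.01, buffer_saves=0.9, seed=0):
    """Toy model of iterated ruin risk. Each year a dynasty faces a
    small chance of catastrophe; a 'hoarder' dynasty's cash buffer
    lets it survive a catastrophe with probability `buffer_saves`,
    while a 'donor' dynasty (no buffer) never survives one.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    survivors = {"hoarder": 0, "donor": 0}
    for strategy, survives_hit in [("hoarder", buffer_saves),
                                   ("donor", 0.0)]:
        for _ in range(n_dynasties):
            alive = True
            for _ in range(years):
                catastrophe = rng.random() < ruin_prob
                if catastrophe and rng.random() >= survives_hit:
                    alive = False
                    break
            if alive:
                survivors[strategy] += 1
    return survivors
```

Compounded over 200 years, the per-year gap is dramatic: the donor's survival chance is roughly (0.99)^200 ≈ 13%, the hoarder's roughly (0.999)^200 ≈ 82%. That is exactly S's point; A's rebuttal below is that the model is aimed at the wrong scale.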
(S and A both turn aside and glare at the idiots who treat Social Security disability insurance as a mere starvation prevention service and not actual insurance.)
A: Of course the system should be better. But systems are for apes, not saints. You can go to graduate school on scholarship; you can rely on outside investment for your (fantasy) entrepreneurial projects. If you can’t get funding, take the outside view: that implies your own time investment is not worthwhile.
S: (Sighs in frustration.) I promise, I don’t want to buy anything expensive and unnecessary with my money. It’s just really easy to imagine obscure scenarios where I would regret not having deep pockets due to excess generosity.
A: Who knew Greed was such a creative? Much better than whatever instinct is responsible for stopping you from city biking in the dark without reflective gear.
S: One time!
A: One time with grave tail risk, and trivial upside. You are quite capable of taking risks, S, as long as it is to the benefit of your immediate comfort. You seem to be “prudent” only when other people’s lives are at stake.
S: That kind of risk-taking should be reduced, not…
A: It doesn’t matter how impossible your password is to guess if you’re facing a phishing attack. Think orthogonally. What is the most likely cause of catastrophe? Marginally less cash on hand, or a rogue car?
S: People work tirelessly to reduce accidents and diseases for a reason.
A: But before they finish, what do you do? Your application of Taleb is entirely wrong! You’re applying his fears about society to an individual. You risk your life, slightly, all the time, when you give to charity or simply walk outside. It’s a risk worth taking, because there’s an even graver risk on the other side: a long life of nothing.
S: I guess I can vaguely imagine that…
A: It is possible that you will be of great benefit to society. So a personal catastrophe can be, in a sense, a societal catastrophe. But the same is true of others. You must not only insure yourself, but every other individual, for everyone poses a potential great upside that must be protected. Of course it is reasonable to pay attention to yourself at first, but cooperative mechanisms require initial upfront unilateral gifts without guarantee of reciprocation. Remind me, what was the “generous tit-for-tat” algorithm’s rate of unrequited cooperation?
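(A's closing question has a concrete answer. Generous tit-for-tat, as studied by Nowak and Sigmund, is tit-for-tat plus forgiveness: it still cooperates after an opponent's defection with some probability g. For the standard prisoner's-dilemma payoffs (T=5, R=3, P=1, S=0), the commonly quoted forgiveness rate is g = 1/3. A minimal sketch, with those assumed payoffs:)

```python
import random

def gtft_move(opponent_last, g=1/3, rng=random.random):
    """Generous tit-for-tat. Returns True (cooperate) or False (defect).
    opponent_last is the opponent's previous move, or None on round 1.
    g = 1/3 is the forgiveness probability often cited for the
    standard prisoner's-dilemma payoffs (T=5, R=3, P=1, S=0)."""
    if opponent_last is None or opponent_last:
        return True          # open with, and reciprocate, cooperation
    return rng() < g         # forgive a defection with probability g
```

Against an unbroken string of defections, GTFT's "rate of unrequited cooperation" is g itself: it extends cooperation about one time in three with no guarantee of reciprocation, which is the upfront unilateral gift A is describing.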