12 Comments
Daniel Greco

Super sympathetic to the spirit of these examples, but it's my nature to pick nits.

On IVT, don't you think a lot of people are intuitively modeling these systems as discontinuous? I think cases you identify as failures to appreciate the IVT are really cases where you think people are wrongly modeling continuous systems as if they were discontinuous.

And on the taxation point, I'm not sure how rare it is for effective marginal tax rates to be greater than a hundred percent. When it happens, the main culprit is benefits that sharply cut off once you're above a given income threshold. I think it's generally recognized that that's a bad way to design benefit programs, but that doesn't mean it doesn't happen.

Linch

Re your first point, I think it's confusing because the real answer is that the discontinuous functions are "continuous enough." E.g., votes are a discrete quantity, so you can't have half a vote, but I think the argument basically still works even though the math isn't perfect.

I'm not sure what level of precision is correct, but the core insight is something like: "if a system transitions from state A to state B as you add inputs one at a time, some specific input must be the one that causes the transition." Which sounds even more banal when put that way! But people don't model reality in this way for some reason.

Re taxation, I believe this is true for benefits but not for taxes alone, at least in the American system. And certainly people have been confused about both! I dunno if it's worth digging into the empirics here.

Daniel Greco

I definitely agree there's a deep mistake involved in thinking one vote/burger can't make a difference, but I've thought of it more along the lines of the informal gloss you just gave than the IVT. Basically, I've thought the right model is that in both cases you have a small chance of making a big difference, such that the EV really depends on the details. (I think the voting case is interestingly different from the meat case: in the meat case all burgers are essentially on a par as far as the likelihood of being a tipping point, at least given the info of a typical consumer, whereas with votes, at least in the US, you have swing states vs. non-swing states.)
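The "small chance of making a big difference" model is just an expected-value product, and the swing/non-swing contrast drops out of it directly. A rough sketch, with entirely made-up pivotality probabilities and an arbitrary impact number:

```python
# EV sketch of the "small chance of a big difference" framing. The
# probabilities of casting the pivotal vote and the impact figure are
# illustrative placeholders, not estimates.

P_PIVOTAL = {"swing_state": 1e-7, "safe_state": 1e-12}  # made-up numbers
IMPACT = 1e10  # arbitrary units of value if the better candidate wins

def vote_ev(state: str) -> float:
    return P_PIVOTAL[state] * IMPACT

# Identical impact, wildly different EV per vote: the "details" the
# comment says the EV really depends on.
print(vote_ev("swing_state"), vote_ev("safe_state"))
```

With these placeholder numbers the swing-state vote is worth about five orders of magnitude more, even though the payoff conditional on being pivotal is the same.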

Linch

"but I've thought of it more along the lines of the informal gloss you just gave than the IVT"

I think this is related to the general concept of unknown knowns!

I think it's very uncommon for math-y beliefs that become part of your worldview to stay in your head as math-y beliefs, unless maybe you're a professional mathematician or something. It's much more common that they just become naturally integrated (same with the idea of differentiable functions being locally linear, most people's actual experience with probability and Bayes' theorem, the pigeonhole principle, etc.).

Linch

Yeah the actual probabilities are very different between swing states and non-swing states, at least for presidential elections. Local elections matter too of course, but not nearly as much, for most of the things I care about.

Felice

I actually don’t see how the IVT applies in these examples. It’s not a matter of the domain being “only approximately” continuous; that’s not really a problem, in that one could go from the integers (discrete voters, burgers, etc.) to the reals (arbitrary fractions of them) via interpolation. The issue is that a “tipping point” in this context is conventionally and literally a discontinuity. What we have here are step functions: for every n burgers demanded, an extra pallet of chickens (corresponding to K chickens) is produced; i.e., (0,n] burgers maps to 1 pallet, (n,2n] maps to 2 pallets, (2n,3n] maps to 3 pallets, etc. Similarly with malaria nets. I submit that 1) it’s highly unnatural for people to think that they may individually contribute to such a jump; and 2) they don’t actually think that way; they recognize they make a tiny difference but simply don’t care.

The solution would seem to be to do away with the IVT altogether, eliding the existence of these large jumps and instead emphasizing the small ones (because in fact none of this is truly continuous!): “Hey, you individual rational agent, however infinitesimal you may think it is, you make a whole-ass increment of change with your choice, so choose wisely!” Individuals know that however many chicken tenders they consume corresponds to an entire chicken life, and that a penny corresponds to however many square inches of a malaria net; it’s just a question of how much those things matter to them when they know the actual numbers. Likewise with voting: in a setting where there are 2 candidates and 2N-1 voters, the N-th vote for a candidate will flip the switch for them (again a step function, 0 loss to 1 win); I think people just have a hard time wrapping their heads around being a 1/N fraction responsible for a flip. Not to go all evopsych theorizing, but I’d assume we mostly evolved to think of the cause and effect of our own efforts as incremental, so indeed continuous or approximately so; and this sometimes falls apart in systems (society) where decisions are made in big jumps (and someone technically triggers the jump), and sometimes it doesn’t fall apart but we are just indifferent in ways that others find objectionable.
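The step-function model described above is easy to make concrete. A minimal sketch, where the burgers-per-pallet number is made up purely for illustration:

```python
import math

# Sketch of the step-function model: every n burgers demanded triggers one
# more pallet of chickens. N_PER_PALLET is an illustrative placeholder, not
# a real supply-chain figure.

N_PER_PALLET = 1000  # hypothetical burgers per pallet

def pallets(demand: int) -> int:
    # (0, n] -> 1 pallet, (n, 2n] -> 2 pallets, etc.
    return math.ceil(demand / N_PER_PALLET)

# Most individual burgers don't change the pallet count...
assert pallets(1500) == pallets(1501)
# ...but the burger that crosses a boundary triggers a whole extra pallet:
assert pallets(2000) + 1 == pallets(2001)
```

On average each burger still accounts for 1/n of a pallet, which is the "whole-ass increment of change" the comment suggests emphasizing instead of the rare jump.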

Matt Reardon

The IVT and ToM are very nice formalizations of things I was grasping at in a post from earlier this year. Predictably outclassed by you! https://open.substack.com/pub/frommatter/p/the-world-is-not-fake-and-your-actions?utm_campaign=post-expanded-share&utm_medium=web

Roman's Attic

“Example 2: People often have the intuition that altruists should be more careful with their money and more risk-sensitive than selfish people, even though the opposite is true. Altruistic people care about global welfare, so zoomed out, almost any individual altruist’s donation budget is linearly good for the world at large.”

There’s a pretty good post on the EA forum arguing that altruists should still be risk-averse investors (https://forum.effectivealtruism.org/posts/cf8Dth9vpxX9ptgma/against-much-financial-risk-tolerance). It says a lot of things, but one of the main arguments I retained from it was that the success of your investments is not fully independent of the success of other donors in the market, meaning that if the stock market performs poorly, all donors who keep their investments in stocks will be able to donate less. If this were to happen, there would be a greater marginal benefit to your smaller dollar amount being donated, meaning that risk-averse savings can be higher in expected utility.

I’m not an expert in the math here, but I think the optimal way to invest is to be exactly as risk-averse as you should be if you were the sole donor for global charities, assuming that every other donor follows that same pattern. If people are skewed toward personal EV maxing or risk aversion, you should take your risks in a way such that it moves the total market risk level towards the optimal level.
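The correlation argument above can be sketched with a toy model. Everything here is a made-up illustration: log utility stands in for "the charity's welfare is concave in total donations," and the boom/bust giving figures are arbitrary.

```python
import math

# Toy model of the correlated-donors argument: if other donors' giving
# tracks the market, your marginal dollar is worth more exactly when the
# market busts. Log utility and the dollar figures are illustrative only.

def welfare(total_donations: float) -> float:
    return math.log(total_donations)  # any concave function makes the point

OTHERS = {"boom": 1000.0, "bust": 400.0}  # other donors' giving per market state

def marginal_value(state: str, my_dollar: float = 1.0) -> float:
    base = OTHERS[state]
    return welfare(base + my_dollar) - welfare(base)

# Your dollar buys more welfare in the bust state, so assets that pay off
# when the market (and hence other donors' giving) is down are worth more
# to an altruist than their raw expected return suggests.
assert marginal_value("bust") > marginal_value("boom")
```

This is why the optimal altruistic portfolio depends on what other donors hold, not just on your own risk tolerance.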

Linch

Right, the actual empirics are nuanced due to correlations with other altruists.

"I’m not an expert in the math here, but I think the optimal way to invest is to be exactly as risk-averse as you should be if you were the sole donor for global charities, assuming that every other donor follows that same pattern. If people are skewed toward personal EV maxing or risk aversion, you should take your risks in a way such that it moves the total market risk level towards the optimal level."

You can do better, in theory! Specifically, if you know that other altruists are biased towards a specific risk, you can take actions that are uncorrelated or negatively correlated with that risk. Concretely, you can buy stocks that are less correlated with Anthropic's and Meta's expected performances, or anticorrelated. (Harder to do in practice than in theory, of course, for various reasons including but not limited to inside views on Anthropic.)

Ali Afroz

I easily get most of the things on your list, but I’m really not sure why the fact that any curve, if you zoom in enough, will look like a straight line implies that you should not buy insurance on small products. Could you explain how the one follows from the other? Especially because I’m really not sure why the utility function would behave so differently around large losses of utility, like life, compared to small losses of utility, like a phone.

Linch

The y-axis is utility, the x-axis is money. A lost phone is a tiny move along the x-axis, where the curve is locally linear, so insurance sold at a markup is negative EV; a loss that's large relative to your wealth is where the curvature (risk aversion) actually bites.
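A toy calculation makes the asymmetry concrete. Assumptions are labeled in the code: log utility of wealth and the $100k wealth figure are illustrative stand-ins, not claims about anyone's actual utility function.

```python
import math

# Sketch of insure-big-not-small under local linearity. Log utility and the
# wealth level are illustrative assumptions, chosen only to show the shape
# of the effect.

WEALTH = 100_000.0

def risk_premium(loss: float, p: float = 0.01) -> float:
    """Max markup over the expected loss a log-utility agent would pay to insure
    against losing `loss` dollars with probability `p`."""
    expected_u = (1 - p) * math.log(WEALTH) + p * math.log(WEALTH - loss)
    certainty_equivalent = math.exp(expected_u)
    return (WEALTH - certainty_equivalent) - p * loss

# A $500 phone sits on a locally-linear stretch of the curve: the agent will
# pay only pennies above the actuarially fair price, so real-world markups
# make the insurance a losing deal.
print(risk_premium(500.0))
# Losing 95% of wealth hits real curvature: the agent rationally pays a
# large markup, which is why catastrophic insurance can be worth buying.
print(risk_premium(95_000.0))
```

Zoomed in, every concave curve is approximately a line, so small-loss insurance is just a negative-EV bet; zoomed out, the curvature is the whole story.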

metafora

I can't understand the author's reasoning well enough to disagree with it, but there's an entirely different reason which is more straightforward: you and yours aren't in a position to net out the actual value of many lives versus life insurance payouts, whereas you can net out smaller insurance choices over many purchases (so actual value ≈ EV, and you shouldn't buy them).
