Unknown Knowns: Five Ideas You Can't Unsee
A holiday appreciation for small ideas
Merry Christmas and Happy Holidays to those who celebrate, and a Tolerable Thursday to those who don’t! Let’s all give our brains a rest and read a post on simpler ideas!
There are a number of implicit concepts in my head that seem so obvious that I don’t even bother verbalizing them. At least, not until it’s brought to my attention that other people don’t share them.
None of these felt like big revelations when I first learned them, just formalizations of something extremely obvious. And yet other people don’t have these intuitions, so perhaps they’re less obvious than they seem.
Here’s a short, non-exhaustive list:
Intermediate Value Theorem
Net Present Value
Differentiable functions are locally linear
Grice’s maxims
Theory of Mind
If you have not heard of some of these ideas before, I highly recommend looking them up! Most *likely*, they will seem obvious to you. You might already know these concepts by a different name, or they may already be integrated into your worldview without ever having been given a definitive name.
However, many people appear to lack some of these concepts, and it’s possible you’re one of them.
As a test: for each idea in the list above, can you think of a nontrivial real example of an intellectual dispute where one or both parties likely failed to model the concept? If you can’t, you might be missing something about that idea!
The Intermediate Value Theorem
Concept: If a continuous function goes from value A to value B, it must pass through every value in between. In other words, tipping points must necessarily exist.
This seems almost trivially easy, and yet people get tripped up often:
Example 1: Sometimes people say “deciding to eat meat or not won’t affect how many animals die from factory farming, since grocery stores buy meat in bulk.”
Example 2: Donations below a certain amount won’t do anything since planning a shipment of antimalarial nets, or hiring a new AI Safety researcher, is lumpy.
Example 3: Sometimes people say that a single vote can’t ever affect the outcome of an election, because “there will be recounts.” I think stuff like that (and near variants) aren’t really things people can say if they fully understand IVT on an intuitive level.
The core mistake? People understand that there’s some range where you’re in one state (e.g., the grocery store buys 2,000 pounds of chicken) and some range where you’re in another (e.g., the store buys 3,000 pounds). But without the IVT, they don’t realize there must be some specific decision that tips the situation from the first state to the second.
Note that this mistake (IVT-blindness) is recursive. For example, sometimes people understand the reasoning for why individual decisions might matter for grocery store orders but then don’t generalize, and say that large factory farms don’t make decisions on how many animals to farm based on orders from a single grocery store.
Interestingly, even famous intellectuals make the mistake around IVT. I’ve heard variants of all three claims above said by public intellectuals.1
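The tipping-point logic can be made concrete with a toy simulation (all numbers hypothetical): suppose a store rounds its weekly demand up to the nearest 100-pound lot. A single extra purchase only rarely tips an order over a threshold, but when it does, it tips it by a whole lot, so the average effect of one purchase is still about one pound:

```python
import random

def order_size(demand, lot=100):
    # The store rounds demand up to the nearest lot (a discontinuous policy).
    return -(-demand // lot) * lot  # ceiling division

random.seed(0)
trials = 100_000
extra = 0
for _ in range(trials):
    demand = random.randint(0, 10_000)  # baseline weekly demand, in pounds
    # How much does the order change if one person buys one more pound?
    extra += order_size(demand + 1) - order_size(demand)

# Close to 1.0: a ~1% chance of tipping a 100-pound lot averages out to
# roughly one pound of marginal impact per purchase.
print(extra / trials)
```

The lumpiness moves the impact from “every purchase matters a little” to “rare purchases matter a lot,” but the expected impact survives, which is exactly what the individual-decisions arguments need.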
Net Present Value
Concept: The value today of a stream of future payments, discounted by how far away they are. Concretely, money far enough in the future shrinks to nearly nothing in present value, so even infinite streams have finite present value2.
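For the curious, the footnote’s geometric-series point is a one-line derivation: a perpetual payment of $c$ per period, discounted at rate $r > 0$, has finite present value

```latex
PV \;=\; \sum_{t=1}^{\infty} \frac{c}{(1+r)^{t}} \;=\; \frac{c}{r}.
```

So at a 5% discount rate, for example, a dollar per year forever is worth only about twenty dollars today.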
Example 1: Sometimes people are just completely lost about how to value a one-time gain vs benefits that accumulate or compound over time. They think the problem is conceptually impossible (“you can’t compare a stock against a flow”).
Example 2: Sometimes people say it’s impossible to fix a perpetual problem (e.g. SF homelessness, or world hunger) with a one-time lump sum donation. This is wrong: it might be difficult in practice, but it’s clearly not impossible.
Example 3: Sometimes people say that a perpetual payout stream will be much more expensive than a one-time buyout. But with realistic interest rates, the difference is only like 10-40x.
Note that in many of those cases there are better solutions than the “steady flow over time” solution. For example, it’d be cheaper to solve world hunger via agricultural and logistical technology improvements, and perhaps economic growth interventions, than the net present value of “feeding poor people forever.” But the possibility of the latter creates an upper bound for how expensive this can be if people are acting mostly rationally, and that upper bound happens to be way cheaper than current global GDP or wealth levels.
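The “10-40x” figure above falls directly out of the perpetuity formula; here’s a minimal sketch (the discount rates are chosen purely for illustration):

```python
def perpetuity_pv(payment, rate):
    # Present value of an infinite payment stream: payment / rate,
    # from summing the geometric series sum of payment / (1 + rate)^t.
    return payment / rate

annual = 1.0  # one unit per year, forever
for rate in (0.025, 0.05, 0.10):
    # 2.5% -> 40x, 5% -> 20x, 10% -> 10x the annual payment
    print(rate, perpetuity_pv(annual, rate))
```

So the one-time cost of replacing a perpetual stream is a modest multiple of the annual payment, not infinite.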
Differentiable functions are locally linear
Concept: Zoom in far enough on any smooth curve and it looks like a straight line.
Example 1: People might think “being risk averse” justifies buying warranties on small goods (negative expected value, but shields you from downside risks of breaking your phone or something). But this is not plausible for almost any realistic risk-averse utility function, which becomes clear once you realize that any differentiable utility function is locally linear.
Example 2: People often have the intuition that altruists should be more careful with their money and more risk-sensitive than selfish people, even though the opposite is true. Altruistic people care about global welfare, which operates at a vastly larger scale than any individual’s wealth; zoomed in to the scale of a single donor’s budget, that utility function is effectively linear, so almost any individual altruist’s donations are linearly good for the world at large.
Example 3: People worry about “being pushed into a higher bracket” as if earning one more dollar could make them worse off overall. But tax liability is a continuous (piecewise linear) function of income: no additional dollar of income can result in more than one dollar of additional tax liability, outside of very narrow pathological cases.
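To see the continuity claim concretely, here is a toy progressive schedule (hypothetical brackets and rates, not any real tax code). Because each marginal rate applies only to income above its threshold, tax owed never jumps:

```python
# (threshold, marginal rate) pairs; each rate applies above its threshold.
BRACKETS = [(0, 0.10), (10_000, 0.20), (40_000, 0.30)]

def tax(income):
    # Tax is a continuous, piecewise-linear function of income:
    # crossing a threshold changes the rate on *further* dollars only.
    owed = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        owed += rate * max(0.0, min(income, hi) - lo)
    return owed

# One extra dollar never costs more than a dollar in tax, even at thresholds:
for income in (9_999, 10_000, 39_999, 40_000, 100_000):
    assert tax(income + 1) - tax(income) <= 1.0
```

This is why “pushed into a higher bracket” fears are misplaced under a marginal-rate system; the genuine pathologies live elsewhere, e.g. in benefit cliffs.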
Understanding that differentiable utility functions are locally linear unifies a lot of considerations that might otherwise confuse people, for example, why one sometimes ought to buy insurance for health and life but almost never for small consumer products, why altruistic people should be more risk-seeking with their investments, why bankroll management is important for poker players, etc.
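One way to see the warranty point is a minimal sketch, assuming log utility and made-up numbers (the function name `risk_premium` is my own): the amount a rational agent would pay to avoid a fair gamble shrinks roughly with the square of the stake relative to wealth, so phone-sized risks carry a negligible premium while house-sized risks carry a huge one.

```python
import math

def risk_premium(wealth, stake, p=0.5):
    # Amount you'd pay to avoid a fair 50/50 gamble of +/- stake,
    # under log utility. The certainty equivalent ce solves
    # log(ce) = E[log(wealth +/- stake)].
    eu = p * math.log(wealth + stake) + (1 - p) * math.log(wealth - stake)
    return wealth - math.exp(eu)

wealth = 100_000
print(risk_premium(wealth, 200))     # phone-sized risk: well under a dollar
print(risk_premium(wealth, 80_000))  # house-sized risk: tens of thousands
```

Locally the curve is a straight line, so small gambles should be priced at expected value; only stakes large relative to your wealth make the curvature, and hence insurance, matter.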
Grice’s maxims
Concepts: Grice actually has four maxims:
Quantity (informativity): Say enough, but not more than needed.
Quality (truth): Only say what you believe to be true and can support.
Relation (relevance): Be relevant.
Manner (clarity): Be clear, brief, and orderly.
I think disputes where one or both sides fail to follow one or more of Grice’s maxims should be fairly self-explanatory.
Many forms of trolling break one or more of these maxims, but not all of them. For example, a Gish gallop breaks the maxim of informativity. Bringing up Hillary Clinton’s emails, or the latest Trump escapade, in an otherwise non-political discussion breaks the maxim of relevance. The bad forms of continental philosophy often break the maxim of manner, which is why many analogize their writings to trolling. And of course, many trolls lie, breaking the maxim of quality.
For a longer, and somewhat ironic, meditation on the importance of Grice’s maxims, consider reading my earlier post:
The Pig Hates It
People on the ‘net often like to quote the George Bernard Shaw line “Never wrestle with a pig because you both get dirty and the pig likes it.”
Theory of Mind
Concept: ToM has many components, but the single most important idea is that other people are agents too. Everybody else has their own goals, their own model of how the world works, and their own constraints on what they can do.
Example 1: Sometimes people ascribe frankly implausible motivations to their enemies, like “Republicans just hate women”, “Gazans don’t care about their children,” “X group just wants to murder babies” etc.
Example 2: Sometimes people don’t even consider that their enemies (and allies, and neutral third parties) even have motivations at all. The Naval War College Historian Sarah Paine calls this “half-court tennis”: sometimes US government officials and generals think about war and peace in relation solely to US strategic objectives. They don’t even consider that other countries have their own political aims, and do not primarily define their own politics in relation to US objectives.
Example 3: Do you often feel like characters in a novel seem “flat”? Like they exist only to advance a narrative point, rather than being fully fleshed-out people with their own hopes and dreams? That’s frequently a theory-of-mind failure on the author’s part.
The core idea is very simple: treat other agents as real. It sounds banal, until you realize how rare it can be, and how frequently people mess up.
I think a full treatise on theory-of-mind failures and strengths is worthy of its own blog post, and that’s what I’m working on next! Subscribe if you’re interested! :)
Why this all matters
Well, first of all, I think all of the concepts above are important, and neat, and it’d be good if more of my readers knew about them!
More importantly, I think ideas matter. I deeply believe that ideas are extremely important and behind much of civilizational progress (and backsliding).
This is one of the central themes of this blog: ideas matter, and if we try harder and work smarter, if we approach every problem with simultaneous dedication and curiosity, together we can learn more ideas, integrate them into our worldviews, and use those ideas to improve our lives, and the world.
I don’t just mean big, all-encompassing, ideological frameworks, like Enlightenment or Communism. I also don’t just mean huge scientific revolutions, like evolution or relativity.
I mean small ideas, simple concepts like the ones above, that help us think better thoughts and live better lives.
I’m interested in a category I think of as Unknown Knowns: concepts that, once acquired, feel less like models you learned and more like obvious features of reality. They’re invisible until you have them, and almost impossible to unsee afterwards, so you never truly notice them as ideas at all.
Today, almost 2000 years after some Jewish dude was nailed to a tree for championing the idea of how great it would be to be nice to people for a change, I want to actually see these ideas again. I want to take some time to appreciate all the ideas that have made my reality better, and all the people who made sacrifices, great and small, to find and propagate those ideas.
Merry Christmas.
Need a last-minute gift? Interested in supporting and spreading a publication chronicling some of the greatest ideas ever? Consider buying a gift subscription to The Linchpin for a loved one, yourself, or a former enemy!
1. More subtly, Derek Parfit is arguably the single most original ethicist of the second half of the 20th century. Yet his discussion of “imperceptible torture” in Reasons and Persons is probably not compatible with the Intermediate Value Theorem.
2. The actual math here has to do with summations of geometric series, which is not worth getting into here but is fairly intuitive for those who want to study up.


Super sympathetic to the spirit of these examples, but it's my nature to pick nits.
On IVT, don't you think a lot of people are intuitively modeling these systems as discontinuous? I think cases you identify as failures to appreciate the IVT are really cases where you think people are wrongly modeling continuous systems as if they were discontinuous.
And on the taxation point, I’m not sure how rare it is for effective marginal tax rates to be greater than a hundred percent. When that happens, the main culprit is benefits that cut off sharply once you’re above a given income threshold. I think it’s generally recognized that that’s a bad way to design benefit programs, but that doesn’t mean it doesn’t happen.
“Example 2: People often have the intuition that altruists should be more careful with their money and more risk-sensitive than selfish people, even though the opposite is true. Altruistic people care about global welfare, so zoomed out, almost any individual altruist’s donation budget is linearly good for the world at large.”
There’s a pretty good post on the EA forum arguing that altruists should still be risk-averse investors (https://forum.effectivealtruism.org/posts/cf8Dth9vpxX9ptgma/against-much-financial-risk-tolerance). It says a lot of things, but one of the main arguments I retained from it was that the success of your investments is not fully independent of the success of other donors in the market, meaning that if the stock market performs poorly, all donors who keep their investments in stocks will be able to donate less. If this were to happen, there would be a greater marginal benefit to your smaller dollar amount being donated, meaning that risk-averse savings can be higher in expected utility.
I’m not an expert in the math here, but I think the optimal way to invest is to be exactly as risk-averse as you should be if you were the sole donor for global charities, assuming that every other donor follows that same pattern. If people are skewed toward personal EV maxing or risk aversion, you should take your risks in a way such that it moves the total market risk level towards the optimal level.