Open question to all readers: How do you balance parsimony vs nuance in your own thinking, in that of academic theory, and in various sides of academic and practical disputes?
I often see calls of both kinds register as "applause lights"[1]. In some contexts people only give arguments for nuance, to "complexify the situation," that "the truth is complicated," etc., but nobody gives an argument for simplicity. In other contexts (more implicitly than explicitly), people say the truth is simple or extol the virtues of parsimony, but don't acknowledge the costs of simplicity.
It's clear to me that this is a tradeoff, and many points along the tradeoff are defensible. But I also don't want this comment of "nuance about nuance" to be a Wise Saying or something you nod along to. It's a practical question: how do you balance parsimony vs nuance? What are guidelines you use? What are practical tradeoffs you've made in your academic or professional work, and/or daily life?
I'm not aware of a specific treatment in favor of nuance, but I see exhortations to nuance all the time in the middle of other discussions. I'm also curious whether readers have other sources they'd like to point to.
When judging two conflicting theories, we prefer the simpler one.
But when trying to capture the total picture of a scenario, the truth is complicated.
Let's give an example: suppose we want to answer why pizza pies are circular.
Without doing any research, suppose you are offered the following three explanations:
1. Circles are an easy shape to make.
2. "Circle" starts with "c," which is the third letter of the alphabet. Pizza was invented by three brothers, so they chose a shape that corresponds to the number 3.
3. It's an easy shape to make, and it was similar to other types of pies, so the first pizza makers were already used to it and chose it for pizza. Also, later attempts to switch to other shapes were rejected because a circle is easy to divide into 8 equal slices, which was harder to do for some other shapes.
1 is simple, and 2 and 3 are complicated. But 2 conflicts with 1 whereas 3 does not.
If I had to assign a probability to each, I would assign 1 the highest probability (it's simpler than 2, and since 3 implies 1, 1 must have at least as high a probability as 3).
However, if I had to answer "which explanation is most likely to be closest to the complete story of why pizza is this shape," I would answer 3, since I expect the complete story to be complicated: it has to account for all the people involved and explain their motivations.
So it makes a difference whether you're trying to answer "which of several mutually exclusive theories is true" vs. "which of several non-exclusive theories is most likely to fully explain a phenomenon."
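The probability point above can be made concrete with a toy calculation. The numbers below are invented purely for illustration; the only thing that matters is the structure: explanation 3 asserts everything explanation 1 asserts *plus* extra details, and a conjunction can never be more probable than one of its conjuncts.

```python
# Toy illustration: explanation 3 = explanation 1 AND some extra history,
# so P(3) = P(1) * P(extra | 1) <= P(1), whatever the actual numbers are.
p_easy = 0.8    # hypothetical P(claim 1: circles are easy to make)
p_extra = 0.5   # hypothetical P(extra details of claim 3, given claim 1)

p_claim_1 = p_easy
p_claim_3 = p_easy * p_extra  # the conjunction

assert p_claim_3 <= p_claim_1  # always holds, for any choice of probabilities
print(p_claim_1, p_claim_3)   # 0.8 0.4
```

So 1 beats 3 on probability of being *true*, even while 3 remains the better guess at the *complete* story.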
Hey! Enjoyed talking with you live the other day, Linch. I’m a philosophy professor with interests in math, political science, and economics.
One thing I’ve been pondering lately is the loss of community in the US. (I know less about the problem in other countries, though would like to learn.) It seems to me like a major force driving many political and economic trends. But it’s also a bit slippery to measure “social capital.” And there are so many things one could mean by community. Friendship? Trust? Interpersonal knowledge? Shared norms? Etc.
So I find this all very new and exciting to think about.
Thanks! The pleasure was mutual. I hope we can speak again!
Here’s a lens I’m curious about.
1) Descriptively, to what extent do you think the loss of community was just a product of intentional choices? It seems like when people have the option for more vs. less community, they mostly go for less (and when people make the reverse choice, like converting to Mormonism or, to a much lesser extent, practicing communal living, it's notable).
2) Normatively, how much community is desirable, relative to the benefits of individualism and individualist culture? It's good to be valued by others, but it's also good that, e.g., others don't get much of a say in my sexuality, who I worship, what clothes I wear, etc.
See Sarah Constantin on both, but especially the normative aspect:
As a vegan with a background in philosophy, I often find myself pondering animal ethics. Lately, I have been more specifically thinking about taxidermy. Is taxidermy unethical?
On the one hand, humans sometimes donate their bodies to science when they die, and funerals with an open casket are common. With humans, it does not seem as strange to preserve the body. In fact, it is part of a cultural practice of mourning loved ones.
On the other hand, those bodies donated to science were donated with the consent of the deceased. Animals cannot communicate with us in a way that gives consent; animals' inability to consent is part of why I am vegan, not vegetarian. Although, historically speaking, discomfort with human corpses seems relatively new: in the 19th century, post-mortem photography was a common way to express mourning and keep memories of the deceased.
(I have other thoughts about how this relates to questions of physicalism, materialism, and immaterialism, but those are harder to articulate and I need to think about them more.)
One question re: taxidermy (and other ways of preserving bodies, or actions with animal bodies in general) is: through taxidermy, *who* are we wronging? Is it the dead animals themselves? Currently living/future living animals who might be wrongfully killed? Some spiritual pact with nature? Ourselves? And is it appropriate to solely think of these questions in terms of the wronged?
(of course, dead humans cannot consent to be taxidermied or donated to science either, which seems relevant here).
Another question I have: as the only student/scholar of ancient Greek and Roman philosophy I know, do you have a guess as to what historical philosophers would say on this question, and why?
1. Huh I didn't know Oxford still had philosophers working on questions like digital minds and anthropics! Feel free to not answer, but how did you survive the FHI and GPI cullings?
2. Re digital minds governance and how it might go wrong, what do you think is the value of thinking about these questions now vs later?
1. The university ultimately opted to honor my contract.
2. I think the main advantage of thinking later is that we'll know more about the science of moral-patiency indicators, what types of AI systems will be prevalent, and the governance landscape.
On the other hand, later thinking may be too late to influence key decisions.
I expect AI governance to become less malleable over time. So, there's a risk that a window for influencing it will pass.
Achieving and diffusing strategic clarity about digital minds will take time. So, I think thinking that generates that clarity should happen well in advance of crunch time.
A lot of people have not yet formed views about digital minds. Earlier work on the topic may help people form more reasonable views. That could be intractable later if the topic becomes politicized.
I also think that not creating digital minds (at least in the near term) is a particularly promising approach that's a convergent interest of a diverse range of actors. But the availability of that approach may be fleeting. So thinking now about how to make that happen seems preferable to delaying.
3. I think I found your blog while I was scrolling through blogs others subscribe to, happened to click on yours, and saw that you post on topics I'm interested in. (I don't remember whose subscription list I found yours on, though.)
Hi Linch! I’m an attorney with a hobbyist’s interest in politics and the philosophical underpinnings of ideology. Much of the project of my blog is devoted to laying a philosophical groundwork for a set of future political writings that I have yet to put in publish-able shape.
The reverse AMA is a very cool idea, not least because I love any opportunity to talk about myself. Thrilled to answer any questions you might have.
Regarding your open question about tradeoffs between parsimony and nuance, I see this primarily as a question of resource allocation: a nuanced understanding of a topic is expensive (there is only so much time in the day), whereas a parsimonious explanation that is right 90% of the time is perfectly adequate except in high-stakes scenarios. I think a person should strive for a nuanced understanding of theories and explanations that pertain to their day-to-day concerns (what they do for a living, how they spend their free time, who they spend it with), but I’m not especially sympathetic to calls for nuance as a generalized principle, as, in any area of inquiry, there is always more nuance to be had: I think anyone appealing to nuance should generally do so by pointing to a specific nuance relevant to some point, and they owe some explanation of the risks associated with overlooking it.
Here's a question: you say "don't live as a utilitarian," but really the article seems to be about systematized morality in general. You give examples of people who are, in your analogy, "moral scientists" probing the bounds of morality and "moral astronauts" who happen to work in exigencies, and you say that non-commonsensical morality does not apply to the rest of us. How would you respond to people who say it does? For example, morality sure seems like it makes demands of my diet, donation choices, and voting behavior! I'm also curious if you have a response to https://forum.effectivealtruism.org/posts/Dtr8aHqCQSDhyueFZ/the-possibility-of-an-ongoing-moral-catastrophe-summary , which argues, forcefully in my book, that we are likely in the middle of one or more moral catastrophes.
I think it’s absolutely a live possibility that we are in the midst of one or more moral catastrophes. Probably several. Arguments to this effect, made by “moral scientists,” should ground the formation of new moral principles that can then be interpolated into our culture’s commonsense social morality. These people should be arguing for new principles, on the grounds that our society’s current principles are inadequate; this is precisely the sort of work that our moral scientists should be doing.
My view is that developing these sorts of arguments and figuring out how they would cash out, as principles, on the day-to-day terms of real-world society is a specialized task; it is a niche that our moral scientists occupy. It is hard to do; it is at least as hard to make a good moral principle as it is to make a good shoe. I think morality makes strong demands on our diet, donation choices, and voting, and the people who are figuring out what norms society should have in these areas are best served by doing so on utilitarian terms.
Where I think utilitarianism is less useful is on the mundane, terrestrial level, further from the moral “frontier”: it’s not obviously productive to conduct a full-blown analysis of whether one should rob a bank, or have kids as a teenager, or cheat on their spouse, every time one considers whether to do so, since the “rules of thumb” we employ in society provide simple & effective answers to these questions. And there is only so much time in the day, and these analyses are hard to do well if we are doing them “from scratch.”
I'm a lapsed anthropologist. I did a very Chinese Anthropology/Ethnology MA in Guizhou with lots of fieldwork expeditions (mainly getting drunk with village cadres) around the province and in Laos, mostly with Hmong/Miao, and Buyi minorities.
Over the past 4 years I've jumped between very typical EA fields (animal welfare/alt proteins; GH&D grantmaking; AI Governance; community building) in a very "jack-of-all-trades" fashion.
I've been thinking about these things (my blogpost drafts I'm stuck on):
1) Are "circuit breakers" for factory farming in poor countries possible?
2) Star Citizen is the Weirdest Economic Phenomenon Ever
3) If There Is A God, Sadly He Is Evil
4) Why are we not really getting happier?
I also have an 8-month-old baby, and a moderately high p(doom) (20% ish), so I've been thinking a lot about child-rearing in the end times lately.
I. Are you ethnically Chinese? Did you have to learn non-Putonghua Mandarin, or other non-Mandarin languages for your anthropology work?
II. I'm interested in 1) and 2)! Especially 1). Some more animal welfare stuff on this corner of substack could be fun!
III. 3) seems really obvious to me unless you posit a very large multiverse (https://slatestarcodex.com/2015/03/15/answer-to-job/) so probably not worth getting into, I doubt you'd persuade anybody who doesn't already believe it.
IV. Is 4) true? It seems like richer countries are happier than poorer countries.
I. I'm not ethnically Chinese. I learned Putonghua in my undergrad. I learned (White) Hmong (which my supervisor also spoke) in a more structured way using old French dictionaries, and a local teacher (both in Laos and China), and got pretty good. I had to learn Guizhou (mainly Qiandongnan) dialect to communicate with older people who couldn't speak Mandarin, but I never learned it systematically, and still sound pretty amateurish when I speak.
II. Thanks! I'll try to write 1). I'll try to use your interest as an incentive to finish it.
III. Yeah, 3) was mainly being "nerd-sniped" by Bentham's Bulldog's re-popularising of the anthropic argument. I'll use your disinterest as a disincentive to finish it.
IV: Somewhat controversially, I am using the "we" to refer to the global top 20% or so. I agree that richer countries are happier than poorer countries (the importance of treating malnutrition, pain relief, etc. for well-being is incredible). But, once we get rich, increased wealth and technological progress seem to have almost no tangible impact on happiness at a population level (the debate is more about whether it's a 1% improvement per decade or a flatline). So the question we should be asking is: "Why are we so crappy at converting wealth and technology into happiness?" I think there are some obvious answers, and some interesting/unexpected ones.
Thanks! I'm curious, as a non-native Chinese speaker (presumably a native English speaker), was learning Hmong easier after learning Standard Mandarin, or was it similarly/more difficult?
I appreciate II. and III. Of course, I'm just one person (and giving you advice in the context of "cheap talk") so don't take my advice too seriously!
Re IV, happy to restrict our conversation to the global top 20% or so! I still think it's very plausible people have gotten a bunch happier but it's hard to measure, but regardless, agree that we can do so much more!
"So the question we should be asking is: "Why are we so crappy at converting wealth and technology into happiness?" I think there are some obvious answers, some interesting/unexpected ones."
I'd love to hear these!
Hmong was significantly easier after having learned Mandarin. Sentence structure and grammar are very similar. Although it's a different language family, there are some Chinese loan words which help you get started: e.g. tabsis from 但是 (but); yaj ywm from 洋芋 (potato), niam from 娘 (mum) (the final consonant is the tone marker).
Learning one tonal language also makes other tonal languages way easier. Hmong (especially as spoken in Southeast Asia) is actually a more "pure" tonal language than Mandarin, more like a South-East Chinese dialect or Vietnamese. So, unlike Mandarin, you learn the tone structure in a textbook and normal people actually use the tones in this way, which is refreshing! It's also purely syllable-timed, so words are very distinct and easy to separate. And, of course, everything is romanised in learning materials. So it was an easier jump from Mandarin to Hmong than (I imagine) the other way around!
I enjoyed the "Rising Premium for Life" post. One of your best!
I'm a philosophy professor who likes to think he would've been an economist, but realistically would've been a lawyer, if I hadn't gone into philosophy.
I'm also married with 4 kids--10, 8, just under 3, and just under 1.
3 questions (feel free to answer all or none of them!)
1. How has having children influenced your work? Has having multiple children influenced any of your academic insights on the nature of knowledge formation, how knowledge defines justification, or any other of your epistemological insights or commitments?
2. You seem to be a very successful philosopher! Are you happy with this life? Relatedly, I have friends considering philosophy graduate school. Do you have thoughts on what considerations they might be under-emphasizing, or other sage advice? (Feel free to drop a link if you've already written or talked about it elsewhere! And again, feel free to not say anything)
3a. The epistemological literature is very vast, so apologies if I'm just very ignorant here. But do you think the ontology I laid out here for epistemological frameworks (https://linch.substack.com/i/170869049/four-failed-approaches) is basically correct? In brief, almost all views of epistemological frameworks for ways of knowing fall under monism (One True Way), pluralism (all ways are good), or nihilism (either truth is not real, or we can't have access to it)?
3b. And if my read of the literature is correct, then I have a pretty obvious follow-up question: why don't professional epistemologists like to develop hierarchical views like mine? Even if they disagree vastly on where things sit in the hierarchy, I think my views represent "normal" intuitions for truth-seeking much better.
1. I don't think having children has influenced my academic work, except to slow it down. Life has tradeoffs! On my deathbed I doubt I'll regret having fewer publications or a smaller H-index than I might otherwise have had, but I do think I'd miss grandkids. That said, part of what I like about Substack is the opportunity to stretch out a bit. While I don't really see how to work thoughts about parenting into my professional work, I have talked about it a bit in Substack posts.
2. I am very happy with the academic life, but I tend to discourage others from pursuing it, for all the familiar reasons. The job market is very rough. People who won the lottery shouldn't recommend it as an investment strategy to others. The way I thought about it at the time was that I was sacrificing money and freedom over where to live for the chance to do something I really enjoyed. I don't think I realized at the time that even getting *a* job was far from a guarantee. It's hard to know how I'd feel if, after 5 years for a PhD and a few years bouncing around as an adjunct, I went into a job that I could've done straight out of undergrad and felt like I was 8 or 10 years behind starting the rest of my life. Maybe I'd be grateful for the time that I'd had! I don't know.
3. What you say in that post strikes me as very sensible. Of course I think ranking is hard, and I'm not sure I think the things you rank in that post are all of the same type in a way that lets them be coherently ranked against each other. E.g., literacy/reading is certainly really important, but feels much less specific than the other methods. I can understand ranking natural experiments against RCTs--though even there, I think ranking is hard, because RCTs often sacrifice external validity for internal validity--but whichever one you use, you'll have to use literacy/reading to carry it out. Try analyzing data if you can't read! Also, there's enough diversity within each approach that I'm uneasy with the ranking. E.g., you put mathematical modeling above natural experiments. There are versions of mathematical modeling and versions of natural experiments where that strikes me as wrong; I tend to think the "credibility revolution" in economics was a positive development, and I think a reasonable way to characterize what was involved is that they elevated the status of natural experiments relative to mathematical modeling. Now maybe that's not the sort of mathematical modeling you had in mind. Fair enough! Maybe it's just hard to say stuff that's strictly true at this level of generality.
You call the four failed approaches "straw" versions of their approaches, and I agree, but I think a lot of the real action is going to be in identifying people that you think are straw monists, when they themselves would insist they're just differing from you on how much weight for this or that method is appropriate. For me at least, the "straw" character that I think is closest to real-life characters is the straw Bayesian. I think earlier versions of myself were probably closer to the straw Bayesian, and I think I see straw Bayesians on Substack. I can imagine the Straw Bayesian insisting that what you call alternatives to Bayesian approaches should really be subsumed into it--they all will amount to informing one's priors, or one's likelihoods. I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end. I've become more sympathetic to the modest, applied Bayesians (e.g., Andrew Gelman) who'd never think the kind of stuff they do can be extended to all of rational cognition.
"There are versions of mathematical modeling and versions of natural experiments where that strikes me as wrong; I tend to think the "credibility revolution" in economics was a positive development, and I think a reasonable way to characterize what was involved is that they elevated the status of natural experiments relative to mathematical modeling"
Agreed! I think of the tier list as a relative hierarchy, not dominance. It's analogous to characters/units in videogames (where the tier list idea was first popularized). A character that's usually very strong (S-tier) can usually be defeated by a really good player with a B-tier or C-tier character, and in specific situations you might even prefer the B-tier or C-tier character to the S-tier ones.
Similarly, really good natural experiments can be better than mathematical modeling in some situations, and also there might be entire fields (maybe economics is one, I'm not sure) where natural experiments overall outperform mathematical modeling in terms of information gained relative to baseline.
I also agree about the sloppy/imprecise abstraction stuff. I think the value of my framework or other ideas like it is trying to be more descriptively accurate about how most people (including highly principled, reasonable people) actually think. Even in the extreme limit (Like AGI in the year 3000) I expect super-reasoners to think in a multitude of super-approaches rather than try to fit everything in one perfect framework.
"I think earlier versions of myself were probably closer to the straw Bayesian, and I think I see straw Bayesians on Substack."
I feel the same way! I think Straw Bayesianism felt more appealing to me at 20, and when I first came across LessWrong.com.
"I can imagine the Straw Bayesian insisting that what you call alternatives to Bayesian approaches should really be subsumed into it--they all will amount to informing one's priors, or one's likelihoods."
Yeah, this sounds right to me. I never got to the bottom of my practical disagreements with them; in practice I don't think they make more Bayesian calculations than I do.
"I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end. I've become more sympathetic to the modest, applied Bayesians (e.g., Andrew Gelman) who'd never think the kind of stuff they do can be extended to all of rational cognition."
For me, I've always had some unease with straw Bayesianism, but I think what really broke it for me was doing a ton of forecasting in 2020 and 2021. For an epistemic practice that felt as close to the ideal case of Bayesianism as possible (assigning numeric probabilities to one-off future outcomes without obvious past frequencies), it just didn't feel like additional study of Bayesianism yielded a lot of insight compared to various other epistemic methods. So I agree with you re: "I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end."
But if you were to put a gun to my head and ask me to choose a preferred philosophy of statistics, I obviously wouldn't say "frequentist," and for practical communicative purposes I think it's better to answer "Bayesian with asterisks" than "some secret third thing."
I'm interested in cryobiology as a science: the cooling and vitrification of organic material down to ultra-low temperatures, to be revived at an arbitrarily later date. Think embryos stored for decades before being implanted, sperm banks, or, on the less scientific side, cryonics.
My interest is mostly in the field of organ cryopreservation. Scientists have successfully vitrified, rewarmed, and implanted a rabbit kidney, and I'm aware of at least two projects currently seeking to do the same with larger organs. A significant percentage of organs that are available to be donated end up rotting before they can be implanted, so if you could preserve these organs indefinitely, it would save a lot of lives by increasing the supply, and would allow transplants to happen on the patient's schedule rather than the recently deceased donor's.
The problem with this technology is that in order to vitrify something, you either need to cool it extremely quickly or add a cryoprotectant: essentially, chemicals that hinder the formation of ice. With small groups of cells, like a day-old embryo or some sperm, you can cool them quickly enough with little or no cryoprotectant, which is good, because the same mechanism that allows cryoprotectants to inhibit ice formation makes them inherently toxic to biology at useful concentrations.
This becomes more of a problem the larger the organ or organism you're trying to preserve (square-cube law and all that): it gets increasingly difficult to vitrify something as the volume-to-surface-area ratio grows. I've read about 200 papers on this topic.
Okay, I have a couple of non-substantive biographical questions and one substantive one. Feel free to answer any or none of them!
1. Non-substantive: What first got you interested in organ cryopreservation? Do you work on it actively / are you considering working on it, other than via reading papers? And have you written up your learnings anywhere (I can't find it on your substack)?
2. Technical: You mention cryoprotectants are "inherently toxic to biology in useful concentrations." Is this toxicity fundamentally unavoidable due to the same chemical properties that prevent ice formation, or is it more like an engineering problem where we just haven't found/designed the right molecules yet?
PS. You mentioned cryonics as the less scientific side of cryobiology. Have you read https://asteriskmag.com/issues/10/brain-freeze, which focused on information loss preservation rather than biological preservation? It seemed really interesting to me but I certainly lacked the expertise (or time) to evaluate further!
I'm one of the whackos who's pro-cryonics, despite its low probability of success. That's what first got me interested in learning how it was done, and when I realized how poor the actual preservation process is, I became more interested in near-term cryopreservation technologies as a transferred interest. I was thinking of possibly doing an organ preservation startup in 2022, similar to what Until is doing now (https://www.untillabs.com/), as I had just exited my first company and was looking for something new (in the end, I just did what I did at my first company again at a larger scale).
I was also stonewalled from attending any academic conferences on cryopreservation. I knew if I was going to do it, I needed to find an experienced scientist to work with, ideally someone who had specific experience in small organ cryopreservation. This is an extremely small group of people though, so I'd settle for someone who simply had tangential experience. Apparently this is such a niche topic that you literally can't attend the few conferences on this or join the organization dedicated to researching cryopreservation without a relevant academic affiliation, which I don't have.
That sort of ruined my chances of doing this, but I'm still optimistic about going back to it in the future, especially if a company like Until fails. I think they are making some mistakes (or at least pursuing hype over effective progress toward their stated goals), but they also just raised something like $50 million, so I imagine they have the money for the hype, and I can't imagine it didn't help them raise that money.
I have not written about this anywhere publicly. I used to run a niche site categorizing all the different cryoprotectants (there are thousands, and I had only done research into a few hundred), which is no longer active. I have all my notes and intend to write something about it publicly eventually, but I'm more of a shape-rotator autist than a wordcell, meaning I struggle with motivation to write well.
It's approaching the problem from a different angle, and I admit it has its advantages. It basically completely removes the possibility of revival without a brain upload, which is fine as that's the preferred method of revival for many cryonicists anyway. It seems to do a better job at preserving the brain structure, and doesn't require the ultra-low temperatures that cryonics does (meaning less ongoing costs, meaning less risk of the company collapsing in the next 1-2 centuries). I imagine sufficiently advanced technology would be able to scan the structure and composition of every preserved neuron and reconstruct that into a living being, but speculating about that sort of technology is beyond my pay grade.
I haven't studied this way of going about it beyond that article, so I can't really evaluate how true their claims actually are. Their arguments are plausible, and the only reason cryonics is done the way it is, is that there isn't anything better. I imagine that if that method had caught on first, we would be having the same conversation about cryopreservation without formaldehyde. IMO, in the long term, cryonics using the current method is the path that can actually become the sort of "hibernation" we like to imagine from sci-fi. But we can't even cryopreserve a small rodent in this way, so the current methods to "preserve" humans have no chance of preserving you in a way that can later be revived without Clarke-tech-level technology.
The methods described in the article just bite the bullet and say, "We're not going to try to use a method that would work if improved by an order of magnitude, because the current methodology falls an order of magnitude short of where we would need to be. Let's instead use an alternate method that we know isn't going after biological revival, but preserves the structure much better, so the level of Clarke-tech necessary is slightly less than with current cryonics." I would not be surprised if this method is currently better, but in the very long term, current cryonics has more potential.
There's also the practical problem of death: you need a doctor to actually confirm you're dead, usually after your brain has already been starved of oxygen for days or weeks as you were naturally dying. If cryonics is to be more effective, it needs assisted suicide, which I oppose on other grounds, but I'm conflicted here because it would be expedient for getting something I want.
2. This one is more complicated and difficult to answer than the first. We actually aren't completely sure what mechanism is responsible for cryoprotectant toxicity, but without fail, doses of a single cryoprotectant at concentrations high enough to limit ice formation to a significant extent are quite toxic. I'm pulling from memory here, but the mechanism by which cryoprotectants inhibit ice formation is binding to water the way water would bind to itself when forming ice, so instead of ice you get all the water in and around a cell bound to the cryoprotectant. Your cells need water to be alive, though, so if this happens at room temperature your cells' machinery breaks very quickly.
This is the most plausible explanation for why basically all useful doses of cryoprotectant are toxic, but many cryoprotectant chemicals have additional toxic effects that make them extremely toxic and unsuited for medical cryopreservation. (Formaldehyde comes to mind as a chemical with a cryoprotective effect that's so insanely toxic it isn't even considered a cryoprotectant, as any concentration is useless for preserving a living thing.) The bread and butter of cryopreservation is DMSO, which can be tolerated at high concentrations and has a high degree of cryoprotective effect.
BUT... it is also an engineering problem! The cryoprotective effect of any given cryoprotectant isn't linear; it often follows a sigmoid curve where at some concentration (sometimes quite low) the cryoprotective effect increases dramatically, then saturates. The toxic threshold, the point beyond which a living thing can no longer tolerate a given concentration of cryoprotectant, usually sits at a different point than this curve, so if we choose our concentration very carefully, we can get some or most of the cryoprotective effect without (as much) toxicity! And if we combine multiple effective cryoprotectants, then for reasons that are hard to model but well established by physical tests, the toxic effects are not exactly cumulative. Thus, by creating a "cryoprotectant cocktail" you can get better cryoprotection at lower toxicity than you can with any individual cryoprotectant.
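As a toy illustration of that concentration tradeoff: the sketch below uses a made-up sigmoid for the protective effect and a made-up toxic threshold (real curves come from physical tests, and the function names and parameters here are my own inventions, not anything from the literature).

```python
import math

def cryoprotective_effect(conc, midpoint=2.0, steepness=3.0):
    """Toy sigmoid: fraction of ice formation suppressed at a given
    concentration (arbitrary units). Parameters are illustrative only."""
    return 1 / (1 + math.exp(-steepness * (conc - midpoint)))

def best_safe_concentration(toxic_threshold, steps=200):
    """Scan concentrations up to the toxic threshold and return the one
    with the strongest protective effect. Because the toy effect curve
    is monotonic, this lands right at the threshold."""
    candidates = [toxic_threshold * i / steps for i in range(steps + 1)]
    best = max(candidates, key=cryoprotective_effect)
    return best, cryoprotective_effect(best)

conc, effect = best_safe_concentration(toxic_threshold=3.0)
print(f"concentration {conc:.2f} -> effect {effect:.2f}")
```

The interesting case, as the paragraph notes, is when the sigmoid saturates well below the toxic threshold: then you can back off the concentration and keep most of the protection, which is exactly the slack a cocktail tries to exploit across several chemicals at once.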
In my opinion this is the area of cryopreservation warranting the most study. There really haven't been any new, more effective cocktails in the past 20 years, and the best research I've read predates that. Most cocktails use only a few cryoprotectants, when there are at least hundreds (thousands, counting unique proteins). If I were doing a startup on this, the first focus wouldn't be preserving mouse brains like Until, or preserving and rewarming small organs like the scientific community is doing, but discovering better cocktails, which would make literally every other aspect of the process easier.
It's also an engineering problem for other reasons I could go into at a similar or deeper level of detail, but won't for brevity's sake. Toxicity is a function of time and temperature, so there are staggered systems for introducing higher (more toxic) concentrations of cryoprotectant as the temperature cools. Perfusion becomes harder the colder you get, which adds further difficulty. There are many ways to operate faster or slower with the right engineering. Actually cooling large 3D organs isn't easy (square-cube law), but the faster you can do it, the lower the likelihood of ice formation (which is inversely related to the rate of temperature change), so figuring out novel ways to cool an organ faster is helpful.
Then there's the temperature gradient: if the heat sink you use to cool an organ (it could simply be the cold air around it) isn't uniformly distributed, you'll get a temperature gradient. During vitrification (a term I should have mentioned earlier: essentially cooling an organ without ice formation, so that it becomes like glass; vitrification is the goal of cryopreservation), the volume of the water decreases slightly, and if this doesn't happen uniformly, you get cracking (literally the organ breaking like a dropped ice cube). You can engineer around this by carefully bringing the organ as close to the vitrification temperature as possible, then pushing it over the line once the whole organ has reached just above the critical temperature.
And if you balance ALL that, only then are you faced with literally the same problem in reverse, because now you have to rewarm the organ! A rewarmed organ is still full of toxic cryoprotectant, with the added pressure that the further along you are in the process, the more rushed you have to be, which is basically the opposite of cooling (where you want to start as soon as possible to prevent cell death).
Fortunately, rewarming is (basically) a solved problem. A team relatively recently (around 2021) added small iron oxide nanoparticles (IONPs) to the cryoprotectant solution; when the organ is put in what amounts to a specialty microwave, it can be rewarmed uniformly and quickly. The rapid rewarming means no ice formation (an even larger problem during rewarming than during cooling), no cracking (the organ is rewarmed throughout its volume rather than just from the cooling surface), and the extra slack from faster, more uniform rewarming means you can satisfy the other constraints much more easily.
Then it's as simple as transplanting the organ back where it came from (or into a new host) and waiting to see if it worked. A team in 2021 successfully did this with rabbit kidneys. The kidney is a hardy organ, but it's also very complex, with lots of nooks and crannies that make it harder to perfuse (and hard to flush the toxic cryoprotectant out of after rewarming), so this was a major accomplishment. I know of teams working on larger organs as we speak, and by now I wouldn't be surprised if some have already succeeded. It may be possible to do this with a small rodent within the next decade, but that introduces a host of new complications I won't get into.
Open question to all readers: How do you balance parsimony vs nuance in your own thinking, in that of academic theory, and in various sides of academic and practical disputes?
I often see calls in both directions register as "applause lights"[1]. In some contexts people only offer arguments for nuance, or to "complexify the situation," or say "the truth is complicated," etc., but nobody gives an argument for simplicity. In other contexts (more implicitly than explicitly) people say the truth is simple, or give arguments for the virtues of parsimony, but don't acknowledge the costs of simplicity.
It's clear to me that this is a tradeoff, and many points along the tradeoff are defensible. But I also don't want this comment of "nuance about nuance" to be a Wise Saying or something you nod along to. It's a practical question: how do you balance parsimony vs nuance? What are guidelines you use? What are practical tradeoffs you've made in your academic or professional work, and/or daily life?
__
The best treatment in favor of parsimony I'm aware of is Kieran Healy's Fuck Nuance (2017) https://gwern.net/doc/philosophy/epistemology/2017-healy.pdf
I'm not aware of a specific treatment in favor of nuance but I see enjoinders to nuance all the time in the middle of other discussions. I'm also curious if readers have other sources they'd like to point to.
[1] https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights
I see this as 2 different situations.
When judging 2 conflicting theories, we prefer the simpler one.
But when trying to capture the total picture of a scenario, the truth is complicated.
Let's give an example: suppose we want to answer why pizza pies are circular.
Without doing any research, suppose you are offered the following 3 explanations:
1. Circles are an easy shape to make.
2. Circle starts with "c," which is the third letter of the alphabet. Pizza was invented by 3 brothers, so they chose a shape that corresponds to the number 3.
3. It's an easy shape to make, and it was also similar to other types of pies, so the first pizza makers were already used to it and chose it for pizza. Also, attempts to switch to other shapes were later rejected because makers had an easy way of dividing a circle into 8 equal slices, which was harder to do for some other shapes.
1 is simple, and 2 and 3 are complicated. But 2 conflicts with 1 whereas 3 does not.
If I had to assign a probability to each, I would assign 1 the highest probability (since it's simpler than 2, and since 3 implies 1, 1 must have a probability at least as high as 3's).
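The point that 3 can't be more probable than 1 is just the conjunction rule, which a quick simulation can illustrate (the probabilities here are invented purely for the example):

```python
import random

random.seed(0)

# Explanation 3 includes claim 1 ("circles are easy to make") plus extra
# historical details, so it can never be more probable than claim 1 alone.
P_CLAIM_1 = 0.7   # invented probability that claim 1 is true
P_EXTRAS = 0.4    # invented probability that the extra details also hold

trials = 100_000
count_1 = count_3 = 0
for _ in range(trials):
    claim_1 = random.random() < P_CLAIM_1
    extras = random.random() < P_EXTRAS
    count_1 += claim_1
    count_3 += claim_1 and extras  # explanation 3 needs claim 1 AND the extras

print(count_1 >= count_3)  # True in every run: P(3) <= P(1)
```

Since the counter for explanation 3 only increments when claim 1's counter does, the inequality holds no matter what probabilities you plug in.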
However, if I had to answer "which explanation is most likely to be closer to the complete story of why pizza is this shape?" I would answer 3, since I expect the complete story to be more complicated: it has to account for all the people involved and explain their motivations.
So it's a difference between trying to answer "which of several exclusive theories is true?" vs.
"which of several non-exclusive theories is more likely to fully explain a phenomenon?"
Hey! Enjoyed talking with you live the other day, Linch. I’m a philosophy professor with interests in math, political science, and economics.
One thing I’ve been pondering lately is the loss of community in the US. (I know less about the problem in other countries, though would like to learn.) It seems to me like a major force driving many political and economic trends. But it’s also a bit slippery to measure “social capital.” And there are so many things one could mean by community. Friendship? Trust? Interpersonal knowledge? Shared norms? Etc.
So I find this all very new and exciting to think about.
Thanks! The pleasure was mutual. I hope we can speak again!
Here’s a lens I’m curious about.
1) Descriptively, to what extent do you think the loss of community was just a product of intentional choices? It seems like when people have the option of more vs. less community, they mostly go for less (and when people make the reverse choice, like converting to Mormonism or, to a much lesser extent, practicing communal living, it's notable).
2) Normatively, how much community is desirable, relative to the benefits of individualism and individualist culture? It's good to be valued by others, but it's also good that e.g. others don't get much of a say in my sexuality, who I worship, what clothes I wear, etc.
See Sarah Constantin on both, but especially the normative aspect:
https://srconstantin.github.io/2017/06/26/in-defense-of-individualist-culture.html
As a vegan with a background in philosophy, I often find myself pondering animal ethics. Lately, I have been more specifically thinking about taxidermy. Is taxidermy unethical?
On the one hand, humans sometimes donate their bodies to science when they die, and funerals with an open casket are common. With humans, it does not seem as strange to preserve the body. In fact, it is part of a cultural practice of mourning loved ones.
On the other hand, those bodies donated to science were done so with the consent of the deceased. Animals cannot communicate with us in a way that gives consent; the inability for animals to consent is a part of why I am vegan, not vegetarian. Although, historically speaking, discomfort with human corpses seems relatively new. In the 19th century, post-mortem photography was a common way to express mourning and keep memories of the deceased.
(I have other thoughts about how this relates to questions of physicalism, materialism, and immaterialism, but those are harder to articulate and I need to think about them more.)
Many thanks // multas gratias tibi ago!
One question re: taxidermy (and other ways of preserving bodies, or actions with animal bodies in general) is: through taxidermy, *who* are we wronging? Is it the dead animals themselves? Currently living/future living animals who might be wrongfully killed? Some spiritual pact with nature? Ourselves? And is it appropriate to solely think of these questions in terms of the wronged?
(of course, dead humans cannot consent to be taxidermied or donated to science either, which seems relevant here).
Another question I have is, as the only student/scholar of ancient Greek and Roman philosophy I know, do you have a guess to what historical philosophers would say on this question, and why?
Hi Linch! I'm a philosopher.
Topics I've worked on include digital minds macrostrategy, phenomenal consciousness, varieties of fine-tuning, Boltzmann brains, and anthropics.
Something I've been thinking about lately: what form digital minds governance should take and how it might go wrong.
(Mini-abstracts for papers of mine are here: https://sites.google.com/a/brown.edu/brad-saad/papers-by-topic .)
1. Huh I didn't know Oxford still had philosophers working on questions like digital minds and anthropics! Feel free to not answer, but how did you survive the FHI and GPI cullings?
2. Re digital minds governance and how it might go wrong, what do you think is the value of thinking about these questions now vs later?
3. How did you come across my blog? :)
1. The university ultimately opted to honor my contract.
2. I think the main advantage of thinking later is that we'll know more about the science of moral-patiency indicators, which types of AI systems will be prevalent, and the governance landscape.
On the other hand, later thinking may be too late to influence key decisions.
I expect AI governance to become less malleable over time. So, there's a risk that a window for influencing it will pass.
Achieving and diffusing strategic clarity about digital minds will take time. So, I think thinking that generates that clarity should happen well in advance of crunch time.
A lot of people have not yet formed views about digital minds. Earlier work on the topic may help people form more reasonable views. That could be intractable later if the topic becomes politicized.
I also think that not creating digital minds (at least in the near term) is a particularly promising approach that's a convergent interest of a diverse range of actors. But the availability of that approach may be fleeting. So thinking now about how to make that happen seems preferable to delaying.
3. I think I found your blog while I was scrolling through blogs others subscribe to; I happened to click on yours and saw that you post on topics I'm interested in. (I don't remember whose subscription list I found you on, though.)
Hi Linch! I’m an attorney with a hobbyist’s interest in politics and the philosophical underpinnings of ideology. Much of the project of my blog is devoted to laying a philosophical groundwork for a set of future political writings that I have yet to put in publish-able shape.
The reverse AMA is a very cool idea, not least because I love any opportunity to talk about myself. Thrilled to answer any questions you might have.
Regarding your open question about tradeoffs between parsimony and nuance, I see this primarily as a question of resource allocation: a nuanced understanding of a topic is expensive (there is only so much time in the day), whereas a parsimonious explanation that is right 90% of the time is perfectly adequate except in high-stakes scenarios. I think a person should strive for a nuanced understanding of theories and explanations that pertain to their day-to-day concerns (what they do for a living, how they spend their free time, who they spend it with), but I’m not especially sympathetic to calls for nuance as a generalized principle, as, in any area of inquiry, there is always more nuance to be had: I think anyone appealing to nuance should generally do so by pointing to a specific nuance relevant to some point, and they owe some explanation of the risks associated with overlooking it.
Hi Gumphus! I read your post https://substack.com/@gumphus/p-169764193 with great interest.
Here's a question: You say "don't live as a utilitarian," but really the article seems to be about systematized morality in general. You give examples of people who are, in your analogy, "moral scientists" probing the bounds of morality, and "moral astronauts" who happen to work in exigencies, and you say that non-commonsensical morality does not apply to the rest of us. How would you respond to people who say it does? For example, morality sure seems like it makes demands of my diet, donation choices, and voting behavior! I'm also curious if you have a response to https://forum.effectivealtruism.org/posts/Dtr8aHqCQSDhyueFZ/the-possibility-of-an-ongoing-moral-catastrophe-summary , which argues, forcefully in my book, that we are likely in the middle of one or more moral catastrophes.
I think it’s absolutely a live possibility that we are in the midst of one or more moral catastrophes. Probably several. Arguments to this effect, made by “moral scientists,” should ground the formation of new moral principles that can then be interpolated into our culture’s commonsense social morality. These people should be arguing for new principles, on the grounds that our society’s current principles are inadequate; this is precisely the sort of work that our moral scientists should be doing.
My view is that developing these sorts of arguments, and figuring out how they would cash out as principles in the day-to-day terms of real-world society, is a specialized task; it is the niche our moral scientists occupy. It is hard to do; it is at least as hard to make a good moral principle as it is to make a good shoe. I think morality makes strong demands on our diet, donation choices, and voting, and the people who are figuring out what norms society should have in these areas are best served by doing so on utilitarian terms.
Where I think utilitarianism is less useful is on the mundane, terrestrial level, further from the moral “frontier”: it’s not obviously productive to conduct a full-blown analysis of whether one should rob a bank, or have kids as a teenager, or cheat on their spouse, every time one considers whether to do so, since the “rules of thumb” we employ in society provide simple & effective answers to these questions. And there is only so much time in the day, and these analyses are hard to do well if we are doing them “from scratch.”
Hey Linch, big fan of the blog!
Some random me-facts:
I'm a lapsed anthropologist. I did a very Chinese Anthropology/Ethnology MA in Guizhou with lots of fieldwork expeditions (mainly getting drunk with village cadres) around the province and in Laos, mostly with Hmong/Miao, and Buyi minorities.
Over the past 4 years I've jumped between very typical EA fields (animal welfare/alt proteins; GH&D grantmaking; AI Governance; community building) in a very "jack-of-all-trades" fashion.
I've been thinking about these things (my blogpost drafts I'm stuck on):
1) Are "circuit breakers" for factory farming in poor countries possible?
2) Star Citizen is the Weirdest Economic Phenomenon Ever
3) If There Is A God, Sadly He Is Evil
4) Why are we not really getting happier?
I also have an 8-month-old baby, and a moderately high p(doom) (20% ish), so I've been thinking a lot about child-rearing in the end times lately.
I. Are you ethnically Chinese? Did you have to learn non-Putonghua Mandarin, or other non-Mandarin languages for your anthropology work?
II. I'm interested in 1) and 2)! Especially 1). Some more animal welfare stuff on this corner of substack could be fun!
III. 3) seems really obvious to me unless you posit a very large multiverse (https://slatestarcodex.com/2015/03/15/answer-to-job/) so probably not worth getting into, I doubt you'd persuade anybody who doesn't already believe it.
IV. Is 4) true? It seems like richer countries are happier than poorer countries.
I. I'm not ethnically Chinese. I learned Putonghua in my undergrad. I learned (White) Hmong (which my supervisor also spoke) in a more structured way using old French dictionaries, and a local teacher (both in Laos and China), and got pretty good. I had to learn Guizhou (mainly Qiandongnan) dialect to communicate with older people who couldn't speak Mandarin, but I never learned it systematically, and still sound pretty amateurish when I speak.
II. Thanks! I'll try to write 1). I'll try to use your interest as an incentive to finish it.
III. Yeah, 3) was mainly me being "nerd-sniped" by Bentham's Bulldog's re-popularizing of the anthropic argument. I'll use your disinterest as a disincentive to finish it.
IV: Somewhat controversially, I am using "we" to refer to the global top 20% or so. I agree that richer countries are happier than poorer countries (the impact on well-being of treating malnutrition, providing pain relief, etc. is incredible). But once we get rich, increased wealth and technological progress seem to have almost no tangible impact on happiness at a population level (the debate is more about whether it's a 1% improvement per decade or a flatline). So the question we should be asking is: "Why are we so crappy at converting wealth and technology into happiness?" I think there are some obvious answers, and some interesting/unexpected ones.
Thanks! I'm curious, as a non-native Chinese speaker (presumably a native English speaker), was learning Hmong easier after learning Standard Mandarin, or was it similarly/more difficult?
I appreciate II. and III. Of course, I'm just one person (and giving you advice in the context of "cheap talk") so don't take my advice too seriously!
Re IV, happy to restrict our conversation to the global top 20% or so! I still think it's very plausible people have gotten a bunch happier but it's hard to measure, but regardless, agree that we can do so much more!
"So the question we should be asking is: "Why are we so crappy at converting wealth and technology into happiness?" I think there are some obvious answers, some interesting/unexpected ones."
This does sound interesting! You might enjoy, on a different but related topic. https://linch.substack.com/p/the-rising-premium-for-life
Hmong was significantly easier after having learned Mandarin. Sentence structure and grammar is very similar. Although it's a different language family, there are some Chinese loan words which help you get started: e.g. tabsis from 但是 (But); yaj ywm from 洋芋 (Potato), niam from 娘 (Mum) (the final consonant is the tone marker).
Learning one tonal language also makes other tonal languages way easier. Hmong (especially as spoken in Southeast Asia) is actually a more "pure" tonal language than Mandarin, more like a South-East Chinese dialect or Vietnamese. So, unlike Mandarin, you learn the tone structure in a textbook and normal people actually use the tones in this way, which is refreshing! It's also purely syllable-timed, so words are very distinct and easy to separate. And, of course, everything is romanised in learning materials. So it was an easier jump from Mandarin to Hmong than (I imagine) the other way around!
I enjoyed the "Rising Premium for Life" post. One of your best!
A reverse AMA is a cool idea!
I'm a philosophy professor who likes to think he would've been an economist, but realistically would've been a lawyer, if I hadn't gone into philosophy.
I'm also married with 4 kids--10, 8, just under 3, and just under 1.
3 questions (feel free to answer all or none of them!)
1. How has having children influenced your work? Has having multiple children influenced any of your academic insights on the nature of knowledge formation, how knowledge defines justification, or any other of your epistemological insights or commitments?
2. You seem to be a very successful philosopher! Are you happy with this life? Relatedly, I have friends considering philosophy graduate school. Do you have thoughts on what considerations they might be under-emphasizing, or other sage advice? (Feel free to drop a link if you've already written or talked about it elsewhere! And again, feel free to not say anything)
3a. The epistemological literature is very vast, so apologies if I'm just very ignorant here. But do you think the ontology I laid out here for epistemological frameworks (https://linch.substack.com/i/170869049/four-failed-approaches) is basically correct? In brief, almost all views of epistemological frameworks for ways of knowing fall under monism (One True Way), pluralism (all ways are good), or nihilism (either truth is not real, or we can't have access to it)?
3b. And if my read of the literature is correct, then I have a pretty obvious follow-up question: why don't professional epistemologists like to develop hierarchical views like mine? (Even if they disagree vastly on where things go in the hierarchy, I think my view represents "normal" intuitions for truth-seeking much better.)
1. I don't think having children has influenced my academic work, except to slow it down. Life has tradeoffs! On my deathbed I doubt I'll regret having fewer publications or a smaller H-index than I might otherwise have had, but I do think I'd miss grandkids. That said, part of what I like about Substack is the opportunity to stretch out a bit. While I don't really see how to work thoughts about parenting into my professional work, I have talked about it a bit in Substack posts.
2. I am very happy with the academic life, but I tend to discourage others from pursuing it, for all the familiar reasons. The job market is very rough. People who won the lottery shouldn't recommend it as an investment strategy to others. The way I thought about it at the time was that I was sacrificing money and freedom over where to live for the chance to do something I really enjoyed. I don't think I realized at the time that even getting *a* job was far from a guarantee. It's hard to know how I'd feel if, after 5 years for a PhD and a few years bouncing around as an adjunct, I went into a job that I could've done straight out of undergrad and felt like I was 8 or 10 years behind starting the rest of my life. Maybe I'd be grateful for the time that I'd had! I don't know.
3. What you say in that post strikes me as very sensible. Of course I think ranking is hard, and I'm not sure I think the things you rank in that post are all of the same type in a way that lets them be coherently ranked against each other. E.g., literacy/reading is certainly really important, but feels much less specific than the other methods. I can understand ranking natural experiments against RCTs--though even there, I think ranking is hard, because RCTs often sacrifice external validity for internal validity--but whichever one you use, you'll have to use literacy/reading to carry it out. Try analyzing data if you can't read! Also, there's enough diversity within each approach that I'm uneasy with the ranking. E.g., you put mathematical modeling above natural experiments. There are versions of mathematical modeling and versions of natural experiments where that strikes me as wrong; I tend to think the "credibility revolution" in economics was a positive development, and I think a reasonable way to characterize what was involved is that they elevated the status of natural experiments relative to mathematical modeling. Now maybe that's not the sort of mathematical modeling you had in mind. Fair enough! Maybe it's just hard to say stuff that's strictly true at this level of generality.
You call the four failed approaches "straw" versions of their approaches, and I agree, but I think a lot of the real action is going to be in identifying people that you think are straw monists, when they themselves would insist they're just differing from you on how much weight for this or that method is appropriate. For me at least, the "straw" character that I think is closest to real-life characters is the straw Bayesian. I think earlier versions of myself were probably closer to the straw Bayesian, and I think I see straw Bayesians on Substack. I can imagine the Straw Bayesian insisting that what you call alternatives to Bayesian approaches should really be subsumed into it--they all will amount to informing one's priors, or one's likelihoods. I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end. I've become more sympathetic to the modest, applied Bayesians (e.g., Andrew Gelman) who'd never think the kind of stuff they do can be extended to all of rational cognition.
"There are versions of mathematical modeling and versions of natural experiments where that strikes me as wrong; I tend to think the "credibility revolution" in economics was a positive development, and I think a reasonable way to characterize what was involved is that they elevated the status of natural experiments relative to mathematical modeling"
Agreed! I think of the tier list as a relative hierarchy, not dominance. It's analogous to characters/units in videogames (where the tier list idea was first popularized). A character that's usually very strong (S-tier) can usually be defeated by a really good player with a B-tier or C-tier character, and in specific situations you might even prefer the B-tier or C-tier character to the S-tier ones.
Similarly, really good natural experiments can be better than mathematical modeling in some situations, and also there might be entire fields (maybe economics is one, I'm not sure) where natural experiments overall outperform mathematical modeling in terms of information gained relative to baseline.
I also agree about the sloppy/imprecise abstraction stuff. I think the value of my framework or other ideas like it is trying to be more descriptively accurate about how most people (including highly principled, reasonable people) actually think. Even in the extreme limit (Like AGI in the year 3000) I expect super-reasoners to think in a multitude of super-approaches rather than try to fit everything in one perfect framework.
"I think earlier versions of myself were probably closer to the straw Bayesian, and I think I see straw Bayesians on Substack."
I feel the same way! I think Straw Bayesianism felt more appealing to me at 20, and when I first came across LessWrong.com.
"I can imagine the Straw Bayesian insisting that what you call alternatives to Bayesian approaches should really be subsumed into it--they all will amount to informing one's priors, or one's likelihoods."
Yeah this sounds right to me. I never got to the bottom of my practical disagreements with them; like in practice I don't think they make more Bayesian calculations than I do.
"I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end. I've become more sympathetic to the modest, applied Bayesians (e.g., Andrew Gelman) who'd never think the kind of stuff they do can be extended to all of rational cognition."
For me, I've always had some unease with straw Bayesianism, but I think what really broke it for me was doing a ton of forecasting in 2020 and 2021. For an epistemic practice that felt as close as possible to the ideal case of Bayesianism (assigning numeric probabilities to one-off future outcomes without obvious past frequencies), it just didn't feel like additional study of Bayesianism yielded many insights compared to various other epistemic methods. So I agree with you re: "I can see the appeal of that way of thinking, but I tend to think it's probably not all that fruitful in the end."
But if you were to put a gun to my head and ask me to choose a preferred philosophy of statistics, I obviously wouldn't say "frequentist," and for practical communicative purposes I think it's better to answer "Bayesian with asterisks" than "some secret third thing."
I'm interested in cryobiology as a science: the cooling and vitrification of organic material down to ultra-low temperatures, to be revived at an arbitrarily later date. Think embryos stored for decades before being implanted, sperm banks, or, on the less scientific side, cryonics.
My interest is mostly in the field of organ cryopreservation. Scientists have successfully vitrified, rewarmed, and implanted a rabbit kidney, and I'm aware of at least two projects currently seeking to do the same with larger organs. A significant percentage of organs available for donation end up rotting before they are implanted, so if you could preserve these organs indefinitely, it would save a lot of lives by increasing the supply, and would allow transplants to happen on the patient's timeline rather than the recently dead donor's.
The problem with this technology is that in order to vitrify something, you either need to cool it extremely quickly or add a cryoprotectant: chemicals that hinder the formation of ice. With small groups of cells, like a day-old embryo or some sperm, you can cool them quickly enough without any cryoprotectant (or with only a small amount), which is good, because the same mechanism that allows cryoprotectants to inhibit ice formation makes them inherently toxic to biology at useful concentrations.
This becomes a bigger problem the larger the organ or organism you're trying to preserve - square-cube law and all that - so it gets increasingly difficult to vitrify something as the volume-to-surface-area ratio grows. I've read about 200 papers on this topic.
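To make the square-cube point concrete, here's a toy back-of-the-envelope sketch (the radii are invented for illustration, not real organ measurements): modeling an organ as a sphere, the volume-to-surface-area ratio grows linearly with the radius, so the bigger the organ, the more interior mass each unit of cooling surface has to serve.

```python
# Toy illustration of the square-cube law: volume grows with r^3 but
# surface area only with r^2, and heat can only leave through the surface.
import math

def volume_to_surface_ratio(radius_cm: float) -> float:
    volume = (4 / 3) * math.pi * radius_cm ** 3
    surface = 4 * math.pi * radius_cm ** 2
    return volume / surface  # for a sphere this simplifies to radius / 3

# Hypothetical sizes: embryo-scale, rabbit-kidney-scale, large-organ-scale
for r in [0.1, 1.0, 5.0]:
    print(f"radius {r:>4} cm -> V/A ratio {volume_to_surface_ratio(r):.3f} cm")
```

The ratio triples every time the radius triples, which is one way of seeing why rapid cooling alone stops working past a certain size.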
Okay, I have a couple of non-substantive biographical questions and one substantive one. Feel free to answer any or none of them!
1. Non-substantive: What first got you interested in organ cryopreservation? Do you work on it actively/are you considering working on it, other than via reading papers? And have you written about your learnings anywhere (I can't find it on your substack)?
2. Technical: You mention cryoprotectants are 'inherently toxic to biology in useful concentrations.' Is this toxicity fundamentally unavoidable due to the same chemical properties that prevent ice formation, or is it more like an engineering problem where we just haven't found/designed the right molecules yet?
PS. You mentioned cryonics as the less scientific side of cryobiology. Have you read https://asteriskmag.com/issues/10/brain-freeze, which focuses on preserving information rather than biological viability? It seemed really interesting to me, but I certainly lacked the expertise (or time) to evaluate it further!
1.
I'm one of the whackos who's pro-cryonics, despite its low probability of success. That's what first got me interested in learning how it's done, and when I realized how poor the actual preservation process is, I became more interested in near-term cryopreservation technologies as a transferred interest. I was thinking of possibly doing an organ preservation startup similar to what Until is doing now (https://www.untillabs.com/) back in 2022, as I had just exited my first company and was looking for something new (in the end I just did what I did at my first company again, at a larger scale).
I was also stonewalled from attending any academic conferences on cryopreservation. I knew if I was going to do it, I needed to find an experienced scientist to work with, ideally someone who had specific experience in small organ cryopreservation. This is an extremely small group of people though, so I'd settle for someone who simply had tangential experience. Apparently this is such a niche topic that you literally can't attend the few conferences on this or join the organization dedicated to researching cryopreservation without a relevant academic affiliation, which I don't have.
That sort of ruined my chances of doing this, but I'm still optimistic about going back to it in the future, especially if a company like Until fails. I think they're making some mistakes (or at least pursuing hype over effective progress toward their stated goals), but they also just raised something like $50 million, so I imagine they have the money for the hype, and I can't imagine the hype didn't help them raise that money.
I have not written about this anywhere publicly. I used to run a niche site categorizing all the different cryoprotectants (there are thousands, and I had only researched a few hundred), which is no longer active. I have all my notes and intend to write something public eventually, but I'm more of a shape-rotator autist than a wordcell, meaning I struggle with the motivation to write well.
P.S.
You can see my comments on that article here: https://www.reddit.com/r/slatestarcodex/comments/1ldj78o/comment/mybmi20/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
It's approaching the problem from a different angle, and I admit it has its advantages. It basically removes any possibility of revival without a brain upload, which is fine, as that's the preferred method of revival for many cryonicists anyway. It seems to do a better job of preserving brain structure, and it doesn't require the ultra-low temperatures that cryonics does (meaning lower ongoing costs, meaning less risk of the company collapsing over the next 1-2 centuries). I imagine sufficiently advanced technology would be able to scan the structure and composition of every preserved neuron and reconstruct that into a living being, but speculating about that sort of technology is above my pay grade.
I haven't studied this approach beyond that article, so I can't really evaluate how true their claims are. Their arguments are plausible, and the only reason cryonics is done the way it is is that there isn't anything better. I imagine that if this method had caught on first, we would be having the same conversation about cryopreservation without formaldehyde. IMO, in the long term, cryonics via the current method is the path that can actually deliver the sort of "hibernation" we like to imagine from sci-fi. We can't even cryopreserve a small rodent this way yet, though, so the methods used to "preserve" humans have no chance of preserving you in a way that can later be revived without Clarke-tech-level technology.
The methods described in the article just bite the bullet and say: "We're not going to use a method that would work if improved by an order of magnitude, because the current methodology falls an order of magnitude short of where we need to be. Let's instead use an alternate method that we know forgoes biological revival, but preserves the structure much better, so the level of Clarke-tech necessary is slightly lower than with current cryonics." I would not be surprised if this method is currently better, but in the very long term, current cryonics has more potential.
There's also the practical problem of death, where you need a doctor to actually confirm you're dead, usually after your brain has already been starved of oxygen for days or weeks as you were naturally dying. If cryonics is to be more effective, it needs assisted suicide, which I oppose on other grounds, though I'm conflicted here because it would be expedient for getting something I want.
2. This one is more complicated and difficult to answer than the first. We actually aren't completely sure what mechanism is responsible for cryoprotectant toxicity, but without fail, doses of a single cryoprotectant at concentrations high enough to significantly limit ice formation are quite toxic. I'm pulling from memory here, but the mechanism by which cryoprotectants inhibit ice formation is binding to water the way water would bind to itself when forming ice, so instead of ice you get all the water in and around a cell bound to the cryoprotectant. Your cells need water to be alive, though, so if this happens at room temperature your cells' machinery breaks down very quickly.
This is the most plausible explanation for why basically all useful doses of cryoprotectant are toxic, but many cryoprotectants have additional toxic effects that make them extremely toxic and unsuited for medical cryopreservation (formaldehyde comes to mind: a chemical with a cryoprotective effect that's so insanely toxic it isn't even considered a cryoprotectant, as any concentration is useless for preserving a living thing). The bread and butter of cryopreservation is DMSO, which can be tolerated at high concentrations and has a high degree of cryoprotective effect.
BUT... it is also an engineering problem! The cryoprotective effect of any given cryoprotectant isn't linear; it often follows a sigmoid curve where at some concentration (sometimes quite low) the cryoprotective effect increases dramatically, then saturates. The toxic threshold (the point where a living thing can no longer tolerate a given concentration of cryoprotectant) usually doesn't coincide with this curve, so if we choose our concentration very carefully, we can get some or most of the cryoprotective effect without (as much) toxicity! And if we combine multiple effective cryoprotectants, for reasons that are hard to model but well known from physical tests, the toxic effects are not exactly cumulative. Thus, by creating a "cryoprotectant cocktail" you can get better cryoprotection at lower toxicity than with any individual cryoprotectant.
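As a toy illustration of that tradeoff (the sigmoid parameters and the toxic threshold here are invented, not data from any real cryoprotectant), you can sketch the dose-response logic like this:

```python
# Toy model: cryoprotective effect as a sigmoid in concentration, with a
# separate toxicity threshold. The point is that the two curves differ,
# so a carefully chosen concentration can capture most of the protection
# while staying tolerable. All numbers are made up for illustration.
import math

def protective_effect(conc: float, midpoint: float = 2.0, steepness: float = 3.0) -> float:
    """Fraction of maximum ice inhibition at a given (hypothetical) molar concentration."""
    return 1.0 / (1.0 + math.exp(-steepness * (conc - midpoint)))

TOXIC_THRESHOLD = 3.0  # hypothetical concentration the tissue can no longer tolerate

for conc in [1.0, 2.0, 2.5, 3.0]:
    status = "tolerable" if conc < TOXIC_THRESHOLD else "toxic"
    print(f"{conc} M -> {protective_effect(conc):.2f} of max effect ({status})")
```

In this made-up example, 2.5 M captures most of the protective effect while staying under the threshold; a cocktail effectively lets you stack several such curves while the toxicities add less than linearly.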
In my opinion this is the area of cryopreservation warranting the most study. There really haven't been any new, more effective cocktails in the past 20 years, and the best research I've read predates this period. Most cocktails use only a few cryoprotectants, when there are at least hundreds (thousands counting unique proteins). If I were doing a startup on this, the first focus wouldn't be preserving mouse brains like Until, or preserving and rewarming small organs like the scientific community is doing, but discovering better cocktails that would make literally every other aspect of the process easier.
It's also an engineering problem for other reasons I could go into at a similar or deeper level of detail, but won't for brevity's sake. Toxicity is a function of time and temperature, so there are staggered systems for introducing higher (more toxic) concentrations of cryoprotectant as the temperature cools. It becomes harder to perfuse the colder you get, which adds further difficulty, and there are many ways to operate faster or slower with the right engineering. Actually cooling large 3D organs isn't easy (square-cube law), but the faster you can do it, the lower the likelihood of ice formation (which falls as the rate of temperature change rises), so figuring out novel ways to cool an organ faster is helpful.
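A toy sketch of why staggered loading helps (the rate law and every number are made up for illustration, not a real toxicity model): if damage accrues faster at higher temperatures, adding the most concentrated cryoprotectant only once the tissue is already cold accumulates less total damage than dumping the full dose at room temperature.

```python
# Hypothetical damage model: toxicity rate scales with concentration and
# rises exponentially with temperature. Compare a one-shot full dose at
# room temperature against a staggered protocol that raises concentration
# as the temperature drops.
import math

def toxicity_rate(conc_molar: float, temp_c: float) -> float:
    """Invented Arrhenius-flavored damage rate per minute."""
    return conc_molar * math.exp(0.05 * temp_c)  # colder -> slower damage

def total_toxicity(schedule) -> float:
    """schedule: list of (concentration M, temperature C, minutes) steps."""
    return sum(toxicity_rate(c, t) * minutes for c, t, minutes in schedule)

all_at_once = [(6.0, 20, 30)]                                  # full dose warm
staggered   = [(2.0, 20, 10), (4.0, 0, 10), (6.0, -20, 10)]    # ramp as it cools
print(total_toxicity(all_at_once), total_toxicity(staggered))
```

Under these invented parameters the staggered schedule accumulates a fraction of the damage of the one-shot dose, which is the intuition behind ramping concentration down the temperature curve.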
Then there's the temperature gradient: if the heat sink you're using to cool an organ (which could simply be the cold air around it) isn't uniformly distributed, you'll get a temperature gradient. During vitrification (a term I should have mentioned earlier: essentially cooling an organ without ice formation, so that it becomes like glass; vitrification is the goal of cryopreservation), the volume of water decreases slightly, and if this doesn't happen uniformly, you get cracking (literally the organ breaking like a dropped ice cube). You can engineer around this by carefully bringing the whole organ as close to the vitrification temperature as possible, then pushing it over the line once the entire organ has reached just above the critical temperature.
And if you balance ALL that, only then are you faced with literally the same problem in reverse, because now you have to rewarm the organ! A rewarmed organ is still full of toxic cryoprotectant, with the added pressure that the further along you are in the process, the more rushed you have to be, which is basically the opposite of cooling (where you want to start as soon as possible to prevent cell death).
Fortunately, rewarming is (basically) a solved problem. A team relatively recently (around 2021) added small iron oxide nanoparticles (IONPs) to the cryoprotectant solution, which, when placed in what amounts to a specialty microwave, let the organ be rewarmed uniformly and quickly. The rapid rewarming means no ice formation (an even larger problem during rewarming), no cracking (the whole organ is rewarmed throughout rather than just from the cooling surface), and the extra margin from faster, more uniform rewarming means the other constraints can be solved much more easily.
Then it's as simple as transplanting the organ back where it came from (or into a new host) and waiting to see if it worked. A team in 2021 successfully did this with rabbit kidneys. The kidney is a hardy organ, but it's also very complex, with lots of nooks and crannies that make it harder to perfuse (and hard to clean of toxic cryoprotectant after rewarming), so this is a major accomplishment. I know of teams working on larger organs as we speak, and by now I wouldn't be surprised if some have already succeeded. It may be possible to do this with a small rodent within the next decade, but that introduces a host of new complications I won't get into.
Thanks, really appreciate all of your detailed answers! Gives me much to think about.
Hi. I'm Shlomo.
Background: I am an American Orthodox Jew who moved to Israel as an adult. I have a wife and 2 kids.
Intellectual Interests: I suppose philosophy, I took a bunch of philosophy classes in school.
Also, as per the background above: Judaism, parenting, etc
Expertise: Nothing unusual, but I am a computer programmer
Other Hobbies: Chess, Brandon Sanderson