Explaining a joke is like dissecting a frog. It feels gross and vaguely sacrilegious, yet somehow I feel compelled to do it anyway.
A year ago, I produced what I consider to be my best April Fools’ joke: Open Asteroid Impact. (Before reading further, you should probably just go to the site! Even if you’ve seen it before, it might help to keep it open in another tab so you can follow what I’m talking about.)
OAI is the fictional launch site of a startup dedicated to intentionally lobbing asteroids at Earth for mineral rights. The startup’s purported mission, naturally, is asteroid impact “safety”: if they (we) don’t accelerate as quickly as possible, other, more dangerous competitors will do so before us.
Of course, the website is a satire of OpenAI. I focused on corporate safety approaches in general and the whole absurd situation we’ve found ourselves in regarding AI safety. But I naturally also satirized OpenAI’s competitors, much of the existing AI safety and AI policy discourse, startup lingo and over-optimism in general, and the broader pattern of poor logic or motivated reasoning masked with scientific-sounding words.
The site was fairly popular for a microsite centered on rather arcane and niche jokes. We forgot to turn on site tracking, but evidence of its popularity with an AI-focused audience includes 800+ likes and 100k+ views on my Twitter launch post, Yudkowsky calling it “the best I've ever seen of whatever this is”, and random popularity in niche subreddits. Unfortunately, it didn’t reach the breakout fame I was vaguely hoping for in my 95th-99th percentile worlds. Still, overall I’m proud of it.
Many people have told me that it’s their favorite April Fools’ joke. Quite a few of my friends liked it. A friend at the Lighthaven community center does the “OMG it’s the CEO of OAI!” bit every time I walk into a room he’s in. A researcher at Anthropic told me it’s the only April Fools’ joke he’s ever liked, though I can’t really tell whether that was meant as a genuine compliment or a subtly barbed insult.
The primary audience for the site was AI and AI safety folks. However, once in a while I still get fan mail from people in places I don’t expect, like a manager at the European Space Agency.
I managed to finish the website surprisingly quickly (almost all of the core writing was done in about half a day). Everything flowed from my hands like water. Nonetheless, a lot of care went into the wording. Pretty much every word, sentence, and arrangement of motifs was carefully crafted with a coherent narrative in mind. But even people who really liked the site often missed a lot of the details, so today (on my birthday) I’ll be extra self-indulgent and explain (almost) every Easter egg and reference you missed in Open Asteroid Impact.
First Page
Opening quote: Astute readers might notice that Hillary Clinton was not the first person to come up with the phrase “That which does not kill us makes us stronger.” Rather, in her 2017 book, she quotes Kelly Clarkson, who in turn was referencing Nietzsche. (Correctly attributing quotes in a misleading way is one of my favorite joke genres.)
Logo: Direct reference to the most iconic scene in Dr. Strangelove. Created with the help of ChatGPT and a human designer (Carolina Oliveira). Funnily enough, Claude refused to help me work on the site initially.
Mission statement: Sam Altman, OpenAI’s CEO, once said: “you can grind to help secure our collective future or you can write substacks about why we are going fail,” which I thought was a ridiculous response to people telling him that it’s evil to build doomsday machines. But outrage doesn’t persuade anyone. It’s unclear if highlighting the ridiculous logic with satire persuades anyone either. Still, I tried.
Intro
Sets up three major running themes for the rest of the site:
High-minded corporate language about “benefiting humanity” with no concrete plans to actually do so. This is just like most AI companies, or, indeed, most of Silicon Valley in general.
Brushing aside any safety concerns.
A false dichotomy of being in a race against “less responsible actors,” without good (or indeed, any) justification for why we’re actually safer. This has indeed been the (dubious) justification for the founding of OpenAI, Anthropic, DeepMind, and other AI companies since the very beginning.
Open Letter wording: Play on the CAIS Statement wording.
FAQ
Open: “That is why we no longer open source our software and models. Instead, we rent out our machines to whoever is willing to pay us enough money.
For safety.” This satirizes the self-serving way AI companies use safety justifications that happen to benefit them financially, while taking actions that don’t benefit safety at all.
(Note: I’m quite uncertain whether sharing open-weight LLMs net-benefits or net-harms AI safety; as of April 2024 I was moderately against open-sourcing. For arguments in favor of open-weight models being net positive for safety, see this podcast with Beth Barnes; for arguments against, see ).
“Indeed, were someone to redirect an asteroid badly it might cause massive damage - something doubtless many terrorist groups are already aware of” also satirizes the nature of self-fulfilling prophecies in AI risk communication.
The last paragraph is a nod to corporate gaslighting, a behavior many companies engage in but that OpenAI/Sam Altman is unusually prolific at.
What about the rocket alignment problem? A reference to a 2017-2018 MIRI/Arbital post analogizing AI alignment to rocket alignment. I didn’t personally find the analogy as they use it persuasive, but I stole from it anyway.
“‘Precision microtargeting’ in landing on roughly the right continent” emphasizes that AI alignment, as most researchers and practitioners understand it, isn’t primarily about solving the difficult and subtle problems humans face in moral philosophy. Instead, it's a series of problems that (many of us believe), if unsolved, can result in substantial and unambiguous misery and death for billions of people. This is meant to counteract intuitions/framings of AI alignment as a problem with a tiny chance of a huge payoff, or as being about very subtle issues in moral philosophy.
“Also, empirically no human-redirected asteroids have ever killed anyone.” emphasizes that the worry is about the (near) future rather than current-generation AI models (or current generation asteroid redirection).
About us
Jesus Quote: It’s a real quote (ish)! Matthew 16:18.
“And there are more atoms in a single molecule of water than there are stars in the entire solar system.” Because 3 > 1: a water molecule has three atoms, and the solar system has exactly one star.
Corporate Structure
References OpenAI’s highly unusual corporate structure.

The unusual structure was used to great effect to silence dissent within OpenAI, and to ensure that ex-employees couldn’t say bad things about the company without losing their compensation.
The punchline (on Open Asteroid Impact’s site, not the other company’s), “to avoid problems with Arrow’s Impossibility Theorem,” is a reference to the complicated corporate structure being necessary to ensure the company is a dictatorship (Arrow’s theorem famously doesn’t apply when a single dictator makes all the decisions). Jokes about social choice economics are a hoot!
Our Team
Austin Chen was a lifesaver, doing all the work of building and maintaining the website so I could focus on refining the wording and jokes. In addition to the technical side, he also made a bunch of design choices that made the website and jokes flow better (e.g. making the quotes prominent, adding portraits, etc.).
Zach Weinersmith was super-supportive of the project but didn’t have time to contribute much other than adding star power and retweeting a few things.
Our Relationships
The Bush quote is real:
Startups
Referencing DeepMind (DeepMine) and Anthropic (Anthropocene), of course.
DeepMine’s CEO Dennis (Demis) is super-sketchy: I feel like I’m taking crazy pills sometimes. It’s publicly known that before Demis Hassabis, the founder of Google DeepMind, got into neuroscience and AI, he was a game developer. He created exactly three games: Black and White, a “god simulator”; Republic: The Revolution, a game about surreptitiously overthrowing an Eastern European country; and Evil Genius, a “tongue-in-cheek world domination simulator”. Yet nobody other than me ever brings it up.
Real life isn’t a movie and doesn’t have to follow genre tropes. But if it were, the audience would be groaning at us for missing clues this obvious.
The Anthropocene (Anthropic) jokes aren’t really funny imo. I put them there for “fair play” but they’re kinda meh.
Nation-State Actors
References the ridiculousness of caring about which country builds the doomsday machines that kill you, rather than objecting to anybody building them in the first place. Now, this isn’t the final word on interstate dynamics. I think there are legitimate complicating factors; you may indeed prefer that some countries develop AI rather than others, and even from the perspective of accident risks, some countries may indeed be safer than others. The main thing it’s satirizing is the simplicity of the argument “Yay America!” -> “Yay American death robots!” The argument might well check out, but you need more steps.
Furthermore, “race narratives” have historically been exaggerated and overstated; see Belfield for a treatment of this subject.
A/accs and decels
Makes fun of e/accs (effective accelerationists, who believe in speeding up AI development at all costs). Also makes fun of false middle-ground positions, and of how people paying lip service to the cause of AI safety may not actually be doing anything about safety (the most recent example that comes to mind is Ilya Sutskever’s so-called “Safe Superintelligence Inc.”, a company valued at $30 billion+ that has not, to the best of my knowledge, made any advances in AI safety).
Independent Safety Evaluators
I wasn’t feeling inspired for this section so asked Matthew Barnett to write it. After April Fools’ Day was over, he also wrote a critique of the Open Asteroid Impact satire and argued for why the analogy doesn’t hold, which I appreciate.
Near-Term Economic Impacts
When Krishna said “I am become Death, the shatterer of worlds,” I believe he had in mind the effect on jobs.
A double reference to two of my favorite clips on existential risk, both terrifying (for different reasons). First, there’s the most famous quote from Oppenheimer (sometimes called the father of the atomic bomb):
J. Robert Oppenheimer: "I am become Death, the destroyer of worlds."
Then there’s this Senate line:
Windfall clause: Reference to OpenAI’s windfall clause. There’s no serious political point to this joke, I just thought it was funny.
Retraining: “Teach truckers to code” has been a meme for a decade-plus. It’s obviously inadequate for anybody who thinks about it for two seconds, and yet people keep offering it as a solution.
Our Safety Measures
Design Principles: Bigger, Faster, Safer
This section was an almost word-for-word lift of Anthropic’s Claude 3 announcement, with its signature 1984-esque line “Smarter, Faster, Safer.” Unfortunately, approximately zero people reading the website noticed that joke, so it just read as a less funny section of the overall site.
Responsible Slinging Policy
The threat-level classification was a combination of Anthropic’s Responsible Scaling Policy and the disaster levels from the One Punch Man anime. I like the joke, but if I were to redo everything I’d probably replace OPM with more topical references.
Operation Death Star
Intended as a reductio ad absurdum of people saying that building up AI capabilities is net positive because it can help wake people up to the threat of building up AI capabilities. D* is also a reference to Q*.
Other
“We are doing everything we can to make the world safer from human redirected asteroids, including awareness raising. This is why we work closely with regulators to ban more dangerous research and development, while creating exemptions for projects that are differentially safe (e.g. large scaling projects).”
Another reference to regulatory capture. In general, OpenAI, Anthropic, Google and others have been known to publicly talk a big game about AI policy and wanting to be regulated while working very hard to prevent AI regulations that might actually negatively affect them in any way.
Open Letter
I think a cool thing about the open letter is that all the “verified names” are real people whose signatures we actually verified. I’m glad to have gotten microcelebrities like Anders Sandberg, Zach Weinersmith (of SMBC Comics), and Nate Soares (President of MIRI) on board. It was also cool to get ~100 other real people, including a lot of PhDs in relevant subjects.
You too can sign the letter, if you are so inclined.
___________________________________
Background: Why Asteroids?
Why did I focus on human-redirected asteroids? Well, it all started 30 minutes before a social…
I hate “what do you do for work” as a question at parties. It’s rarely useful and almost never fun. In the spring of 2024 this was especially annoying, because I worked as a ~full-time grantmaker at a foundation for AI safety and biosecurity, and there was a real chance I’d bump into people who were, or wanted to be, funded by us. Not a fun thing to happen at a party. So instead I started making up increasingly elaborate fake jobs to talk about, until I finally settled on “founder of an asteroid impact startup.”
The Open Asteroid Impact startup bit was a hoot! A lot of people enjoyed it, though not always for the right reasons. There was at least one party where multiple people non-sarcastically complimented me on my startup, and a VC expressed a lot of interest and asked for my contact info to learn more.
So come April Fools’ Day, I already had a lot of material from various people riffing on my fake job, plus prepared answers to the obvious questions about the startup.
Retrospectives and Lessons
"[They] tell all the truth but tell it slant -- success in Circuit lies" - Emily Dickinson on deceptive alignment
Creating Open Asteroid Impact taught me something important: we desperately need better narratives around AI safety. The current discourse is failing not because nobody understands the technical risks, but because the entire framing feels like a confused pile of corporate doublespeak, doomsday preaching, incomprehensible technical jargon, misguided wokeism or nationalism, and self-proclaimed Oppenheimers eager to create God. (Sometimes it’s even the same people doing all of the above!)
The enthusiastic response to the site, from AI safety researchers to policymakers to complete outsiders, suggests many people are eager to find new ways to discuss these issues. Satire works because it jolts people out of their entrenched narratives and makes them see familiar arguments with fresh eyes. But it’s not enough. We need stories, analogies, and frameworks that make the real challenges concrete without sacrificing linguistic precision or resorting to either corporate sanitization or emotionally manipulative narratives.
If I wanted to focus on AI comms and build on this project, I'd work on creating more accessible content that bridges the gap between technical AI safety work and public understanding. Not more clever in-jokes (though those help), but clear, honest communication about what we're actually trying to solve and why current approaches fall short.
The asteroid analogy resonated because everyone intuitively understands that hurling space rocks at Earth is dangerous, regardless of who's doing it or why (though I’m sure Don’t Look Up helped!). We need to find equally intuitive ways to communicate about AI risks - ways that don't require a PhD in machine learning or a willingness to decode corporate PR.
Until then, at least we can laugh at the absurdity. And perhaps one day, hopefully one day soon, that laughter will actually inspire people to build solutions to all of the problems that we’re satirizing.