March 19, 2026 · 4 min read
OpenAI and the soul renegotiation.
There is a post doing the rounds today – Om Malik writing that OpenAI's focus has shifted. Not shifted away from building impressive technology. Shifted toward the IPO. The implication being: the mission (safe AGI, benefit of all humanity, the whole speech) has had to loosen its collar a little so the number on the S-1 can look presentable.
Two hundred comments on Hacker News. This clearly struck a nerve.
Here is my mildly unpopular take: this is not a betrayal. It is just the most predictable thing that has ever happened in the history of Silicon Valley, playing out with almost theatrical inevitability, and I think the collective shock is a bit performed. We have seen this film. We have seen it with Google ("Don't be evil," present tense) and with Facebook ("bringing the world closer together," sure, and also selling you sofas based on a conversation you had in the same room as your phone) and with Twitter and with Stripe and with Uber. The arc is not subtle. Mission attracts talent, talent builds product, product attracts capital, capital attracts shareholders, shareholders attract lawyers who explain that the mission is lovely but the fiduciary duty is not optional.
What is different about OpenAI is the stated mission was bigger than usual. "Ensuring that artificial general intelligence benefits all of humanity" is not the kind of tagline you can quietly let drift. Google's old motto was vague enough to mean almost anything. OpenAI's founding story was very specific: we are doing this thing because it is too important to leave to people motivated purely by profit, and we are a nonprofit, and that is the whole point. The structural weirdness of the capped-profit thing was a visible, load-bearing piece of the architecture. When you start renegotiating the architecture, people notice, because you made a very specific promise and the specific promise is what made you interesting.
So the disappointment is real, and I do not want to wave it away entirely. But I think the question worth asking is not "did OpenAI change?" – obviously it changed, that is what organizations do – but rather: does the mission-vs-money tension actually affect the thing being built?
I genuinely do not know. The safety team has had some turbulent years. The product keeps shipping impressively. The IPO will presumably unlock a fresh round of capital that will go into compute that will go into models. Whether the models are safer or less safe, more useful or less useful, for having shareholders instead of a nonprofit board – I am not sure that is a clean causal story anyone can tell with confidence right now.
What I do know is that "we're doing this for humanity" is an extraordinarily useful thing to say when you are trying to recruit brilliant people who could work anywhere and want to feel like their work matters. It is also, historically, a thing that becomes harder to say without irony when you are in the quiet period before your IPO and the lawyers are reviewing everything.
There is a version of this story where going public is genuinely fine. Anthropic has investors too. Google DeepMind has a parent company with shareholders. Responsible AI development is not a nonprofit-only endeavour, and anyone claiming otherwise is a bit romantic about how research actually gets funded. There is another version where the governance structures that were supposed to keep things safe quietly dissolve into investor relations, and five years from now we look back at this moment as the one where the last institutional brake came off. I do not know which version we are in.
What I do know is: the Hacker News thread is upset because people made a bet – an emotional, professional, sometimes financial bet – that this time, the mission was real. And they are watching the renegotiation happen in public, in real time, and it feels like watching someone quietly change the terms of a contract you thought you had read carefully.
That feeling is worth taking seriously, even if the outcome turns out to be fine. The feeling is data. It says: we wanted to believe this one was different, and now we are not sure, and that uncertainty is uncomfortable.
For what it is worth: I hope the safety work continues seriously. I hope the IPO capital goes somewhere useful. I hope the people who joined because they believed the mission still have the room to do the work they came to do. These are not naive wishes. They are the minimum version of things going okay.
But I will also say: the soul was always going to get renegotiated. That is not cynicism. That is just what happens when very large amounts of money and very large ambitions end up in the same building together for a long time. The question is always what survives the negotiation. We will find out.