On March 24, 2026, the man who built ChatGPT told his team he is no longer in charge of keeping it safe. A new model called Spud is almost ready. And the real reason behind all of it will surprise you.
It was a Tuesday. March 24, 2026. And inside OpenAI — the company behind ChatGPT, the most used AI tool on the planet — Sam Altman called a staff meeting and said something that nobody expected.
He told his team that from that day forward, he would no longer be directly overseeing the company's safety and security teams.
Let that land for a moment. Sam Altman is not just the CEO of OpenAI. He is arguably the most powerful person in the AI world right now. He has been the face of every safety conversation, every government hearing, every public debate about whether AI is going to help or hurt the world. And now he was saying: someone else will handle that.
The same week, he also announced something else — something that had been secretly cooking inside OpenAI for months. A new AI model, nicknamed Spud, had just finished its training. And it was coming. Very soon. Within weeks.
Two giant news stories, same day. So what is actually going on? Let's decode it — chapter by chapter.
"A very powerful model will come out within a few weeks. The whole team believes this model can meaningfully accelerate the overall economy."
— Sam Altman, to OpenAI staff, March 24, 2026To understand why this matters, you need a little bit of history. And it is genuinely one of the most dramatic stories in modern tech.
OpenAI was founded in 2015 as a non-profit. Its entire purpose was to build AI that was safe and beneficial for all of humanity. Safety was not a side feature — it was the whole reason the company existed. Sam Altman was part of that founding team.
So this is not a one-day story. It is the end of a chapter that has been building for years. Every time there was a conflict between "move fast" and "be careful," OpenAI inched a little further in one direction. And on March 24, that direction became official.
Before we go further, let's decode some terms. Because this story is full of words that sound important but never get explained properly.
At a company like OpenAI, "safety" means making sure the AI doesn't do harmful things. This includes preventing it from helping someone build a weapon, generating content that hurts people, or behaving in ways that humans did not intend. It is the team whose job is to ask: "What could go wrong?" before releasing something to the world.
Alignment is about making sure AI systems do what humans actually want — not just what they are technically asked to do. Think of it like training a new employee. You don't just want them to follow every instruction literally. You want them to understand the spirit behind the instructions and behave sensibly even in situations nobody planned for.
"Safety" is about what the AI model itself does — its behaviour, its outputs, its values. "Security" is about protecting the systems and infrastructure around the AI from external attacks — hackers, data breaches, attempts to manipulate or steal the technology. Think of safety as the rules inside the building, and security as the locks on the doors.
So when we say Sam Altman stepped back from both, we are saying he is no longer the person making decisions about either what the AI does — or how it is protected from the outside world.
This is the question everyone was asking on March 24. Altman is stepping back — fine. But who exactly is stepping up? And are they the right people for this?
Mark Chen is one of the most respected researchers inside OpenAI. He led the teams that built DALL-E (the AI image generator), Codex (the code-writing AI that became GitHub Copilot), and GPT-4's visual capabilities. He became Chief Research Officer in 2025. He is now the person responsible for safety — meaning all the work on making sure OpenAI's models behave properly falls under him. He is not a newcomer. He has been inside the walls of OpenAI from almost the very beginning.
Greg Brockman is one of the co-founders of OpenAI. He was there on day one. After a brief sabbatical in 2024, he returned as President and leads the scaling of OpenAI's technical infrastructure — the actual computers, servers, and systems that run the AI. Security now falls under him, which makes logical sense given that security is deeply tied to the infrastructure he already oversees.
The picture that emerges is this: safety and security are not being handed to outsiders, or being deprioritised in the sense of being removed. They are being embedded inside the two biggest technical teams at OpenAI — research and infrastructure. The argument Altman is making is that safety should live inside the engineering work, not sit separately above it.
When safety sits inside the research team — the same team that is under pressure to ship the next model quickly — does it truly stay independent? When the people who build the product also judge whether the product is safe enough, who is watching the watchers? That tension is at the heart of this story, and it has no clean answer.
On the same day as the safety announcement, Altman told his team something else. OpenAI had just completed the pre-training of its next major AI model. And they were calling it Spud.
When an AI model is "pre-trained," it means the first and most intensive phase of teaching is done. Imagine you've spent years reading every book, article, and website on the internet and your brain has absorbed all of it. That's pre-training — the raw foundation. After pre-training comes fine-tuning, testing, safety evaluation, and then public release. So "pre-training complete" means: the main work is done, and the finish line is close.
Nobody outside OpenAI knows exactly what Spud is yet. Is it GPT-6? Is it GPT-5.5? Altman deliberately did not say. What he did say was that it is "a very strong model" and that it could "meaningfully accelerate the overall economy." He expected it to be released publicly within a few weeks of that announcement.
Inside big tech companies, major projects get internal codenames to keep things secret — and to avoid leaks before they're ready. "Spud" is just a potato. OpenAI has a history of choosing simple, mundane codenames precisely so they don't attract attention. The name means nothing. The model, however, appears to mean everything.
The same week Spud was announced, OpenAI also quietly shut down Sora — its AI video generator that launched with enormous fanfare in 2024, and even had a $1 billion partnership deal being negotiated with Disney. That deal is now dead. The Sora app is being switched off entirely. Why? Because Sora consumes enormous amounts of computing power. And OpenAI needs every scrap of that power for Spud and the models that come after it.
It is a stark illustration of priorities. A product that was making headlines a year ago — gone. In AI, today's sensation can become tomorrow's distraction remarkably fast.
Here is the part of the story that does not get enough attention. Why is Sam Altman stepping away from safety? Not because he doesn't care. But because, in his view, right now there is something even more urgent — and it is not a software problem. It is a construction problem.
Building advanced AI today requires an almost incomprehensible amount of physical hardware. We're talking about massive buildings the size of several football fields, filled with tens of thousands of computers running day and night. These buildings consume more electricity than small cities. They need to be cooled with water or liquid systems to prevent them from overheating. And they need to be built at a pace that the construction industry has never seen before.
Infrastructure, in the AI context, means all the physical stuff that AI needs to exist and run. This includes data centers (giant buildings packed with computers), chips (the specific processors that AI runs on — Nvidia makes the most important ones), power supply (enormous amounts of electricity), cooling systems (to stop the chips from melting), and supply chains (the global network of factories and shipping routes that get all this hardware to where it's needed). Without infrastructure, there is no AI.
OpenAI, as of early 2026, does not own a single data center. It relies entirely on partners — Microsoft, Amazon, Oracle — to provide the physical computing power it needs. That is a remarkable vulnerability for a $730 billion company. And Altman knows it.
In February 2026, OpenAI raised $110 billion in a single funding round — the largest private fundraise in the history of any company. Amazon put in $50 billion. Nvidia put in $30 billion. SoftBank put in $30 billion. The company's target is to spend $600 billion on computing infrastructure by 2030. Those are not technology numbers. They are national infrastructure numbers.
"Anything at this scale — it's just like so much stuff goes wrong."
— Sam Altman, at BlackRock's Infrastructure Summit, March 2026He was talking about the Stargate project — OpenAI's flagship data center complex being built in Abilene, Texas. A severe weather event had temporarily knocked it offline. Supply chains were causing delays. Deadlines were slipping. This is why Altman himself needs to be focused on infrastructure. Because nobody else at OpenAI has the relationships, the fundraising credibility, and the political access to make deals at that scale happen.
Imagine you are running a bakery that has suddenly become the most popular in the world. Millions of people want your bread. But you don't have enough ovens. Your suppliers are running out of flour. Your electricity bill has become so large it threatens to bankrupt you. In that situation — do you focus on the recipe, or do you focus on getting more ovens built? Altman is choosing the ovens. And he believes that if the ovens are not built fast enough, it doesn't matter how good the recipe is.
Honest question. Honest answer. The truth is: this is both. At the same time. And anyone telling you it's entirely one or the other is probably selling you something.
Mark Chen is genuinely excellent — a researcher who has been inside the safety conversations from the beginning. Greg Brockman is a co-founder who has always cared about the mission. Embedding safety inside research, rather than above it, might actually produce better outcomes because the people doing the safety work are the same ones who understand the models most deeply. And if OpenAI loses the infrastructure race to a competitor who cares less about safety, the world ends up worse off anyway.
Every single time safety has been restructured at OpenAI, it has come with a story about how the new structure is actually better. And every single time, independent researchers have left shortly after, citing pressure to move fast over move carefully. Safety teams that report to people who are also judged on how fast they ship products face a built-in conflict of interest. The pattern here is hard to ignore.
What makes this moment genuinely strange is that Altman himself seems to be aware of it. Just one day after the announcement, on March 25, he posted publicly that "AI will also present new threats to society" and that "no company can sufficiently mitigate these on their own." He simultaneously stepped away from safety oversight — and warned the world that the risks from AI are too large for any single company to handle.
That is not hypocrisy, exactly. It might be the most honest thing he has ever said. He might genuinely believe both things at the same time: that infrastructure is the most urgent priority right now, and that safety is an unsolved problem that goes beyond any one company's capacity to fix. The two things can coexist. But holding both simultaneously requires a level of nuance that a five-second news headline will never give you.
Here is the practical takeaway from everything above.
In the next few weeks, you will almost certainly hear about a new OpenAI model launching. That is Spud — though it may well be released under a different name entirely. If Altman's words hold up, it could be the most capable AI model that has ever been released to the public. That means the tools you use — ChatGPT, Copilot, and anything else powered by OpenAI technology — will get significantly more powerful, probably overnight.
The safety change is slower-moving and harder to see in your daily life. You will not notice a difference in how ChatGPT behaves next week because of this restructuring. What you might notice, over months and years, is whether the safety commitments that OpenAI makes in public continue to match what actually happens inside the product. That is the thing worth watching.
What is also worth watching — and this is the bigger picture — is the infrastructure race itself. The companies that win the AI era will not be the ones with the cleverest ideas. They will be the ones who can build enough physical computing capacity, fast enough, in the right places. That is an old story, actually. It is the same story that decided who won the industrial revolution, the railroad age, and the internet age. In every era, the infrastructure builders eventually become the most powerful players.
Altman has clearly read that history. Whether the move he is making is wise, reckless, or both — the scale of it is extraordinary. A man running a $730 billion AI company is choosing to spend his time managing construction projects, supply chains, and billion-dollar fundraising deals. Because he believes, with everything he has, that whoever builds the biggest engine wins the race.
Sam Altman did not abandon safety — he handed it to two people he trusts deeply, so that he could focus on something he believes matters even more right now: building the physical infrastructure that will determine whether OpenAI still leads the AI world in 2030.
This is Day 20 of 90. Every day, one topic decoded in plain English — no jargon, no hype, just the honest story behind the headlines.