Each Part covers one decade. Read them in order for the full story, or jump to any era that interests you. Every decade ends with a cliffhanger, because that's exactly what it felt like to live through it.
You've heard the names. ChatGPT. Claude. Gemini. Midjourney. Every week, it seems like a new AI tool arrives that either amazes or terrifies the world.
But here's what nobody tells you: the story of how we got here is one of the most dramatic, heartbreaking, thrilling stories in all of human history. It involves a British mathematician persecuted by his own government. A summer conference that launched a revolution. Decades-long winters where everyone gave up. A chess match that shook the world. And finally, an explosion of intelligence that nobody fully saw coming.
Ready? Let's go back to 1950, where it all began with one impossible question.
It is 1950. The world is still recovering from the most devastating war in history. Computers, enormous room-sized machines, have just helped the Allies crack enemy codes and win the war. But one man isn't thinking about what computers have done. He's obsessed with what they could do.
His name is Alan Turing. He is 38 years old, brilliant beyond measure, and quietly revolutionary. Working at the University of Manchester in England, Turing asks a question that will echo through history for the next 75 years:
"Can machines think?"
In October 1950, he publishes a paper called 'Computing Machinery and Intelligence.' In it, he proposes what he calls 'The Imitation Game': a test where a human judge has a conversation via text with both a human and a machine. If the judge cannot reliably tell which is which, the machine has passed. Today we call this the Turing Test, and we are still debating it.
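The protocol is simple enough to sketch in a few lines of code. What follows is a toy illustration, not anything from Turing's paper; the `human_reply`, `machine_reply`, and `judge` functions are placeholders for whoever, or whatever, sits at each end of the conversation.

```python
import random

def imitation_game(questions, human_reply, machine_reply, judge):
    """Toy sketch of Turing's imitation game.

    The judge converses by text alone with players "A" and "B".
    One is human, one is a machine; the judge must say which.
    """
    # Randomly hide the machine behind one of the two labels.
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        players["A"], players["B"] = players["B"], players["A"]

    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in players.items()}

    guess = judge(transcripts)  # judge names the suspected machine
    return guess == ("A" if players["A"] is machine_reply else "B")
```

If judges guess right no more often than a coin flip, the machine has passed. Everything rests on the conversation itself; the machine's appearance, voice, and inner workings are hidden by design.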
British mathematician Alan Turing publishes a paper asking if machines can think. Most of the world laughs. A small group of researchers takes it deadly seriously.
Six years later, in the summer of 1956, something remarkable happens. A group of America's brightest minds gather at Dartmouth College in New Hampshire for what sounds like an academic conference. But what actually takes place that summer is the birth of an entirely new field of science.
The organiser is a 28-year-old mathematician named John McCarthy. Bold, confident, and visionary, McCarthy has already decided what to call this new field before the conference even begins. He coins two words that will one day appear in every newspaper on earth: Artificial Intelligence.
The 1956 Dartmouth Conference is officially considered the founding of Artificial Intelligence as a field. John McCarthy is 28 years old. He has no idea what he has just unleashed.
But this beginning has a shadow that most AI histories skip past. Alan Turing, the man who arguably started it all, will never see the revolution he sparked. In 1952, the British government prosecutes Turing for being gay. He is subjected to chemical castration as 'treatment.' In 1954, aged just 41, he is found dead, a half-eaten apple beside him. The inquest rules it suicide by cyanide poisoning, though the apple is never tested and the verdict is still disputed.
The father of computer science and the man who first asked if machines could think is gone before the field he inspired is even officially named. The story of AI begins with a tragedy.
Now someone had to actually build the thing.
The 1960s arrive and AI researchers are flying high. Government money, much of it from US defence research agencies, is pouring into their labs. The field's leaders are making bold, sweeping predictions that machines will match human intelligence within a generation.
And then something extraordinary happens at MIT. In 1966, a computer scientist named Joseph Weizenbaum builds a program called ELIZA. It is designed to simulate a psychotherapist: one that responds to your statements by reflecting them back as questions. If you say "I feel sad," ELIZA responds: "Why do you feel sad?" Simple pattern matching. No real understanding whatsoever.
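To see how little machinery is involved, here is a minimal ELIZA-style sketch. The rules are invented for illustration; Weizenbaum's actual DOCTOR script was larger and subtler, but the principle, match a pattern and reflect it back, is the same.

```python
import re

# Illustrative rules in the spirit of ELIZA's DOCTOR script
# (invented here, not Weizenbaum's originals). Each pattern
# captures part of the statement and reflects it back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(eliza_reply("I feel sad"))  # -> Why do you feel sad?
```

A handful of regular expressions and a fallback phrase. That is the entire trick.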
And yet, people fall for it completely.
Weizenbaum's own secretary, who knows ELIZA is a program, asks him to leave the room so she can have a private conversation with it. Real therapists begin suggesting it could replace human psychologists. Weizenbaum is horrified.
Throughout the 1960s, the promises grow bigger and bigger. Marvin Minsky tells Life magazine in 1970 that "in from three to eight years we will have a machine with the general intelligence of an average human being." Herbert Simon predicts in 1965 that "machines will be capable, within twenty years, of doing any work a man can do."
These are not fringe voices. These are the most respected scientists in the field, speaking with total confidence. They are about to be spectacularly, catastrophically wrong.
Language, common sense, physical movement, visual understanding (things humans do effortlessly) would turn out to be almost impossibly hard to recreate in machines. The "easy" problems were actually the hardest.
Now came the reckoning.
It is 1973. The British government commissions a report to evaluate AI research. What Sir James Lighthill writes is devastating. After thoroughly reviewing all AI research, he concludes that virtually none of the grand promises of the 1950s and 1960s have been delivered.
The report lands like a bomb. The British government immediately cuts almost all AI funding. The American funding agencies follow. Researchers who have dedicated their careers to AI find their grants cancelled, their labs shuttered.
The brilliant young scientists who had gathered at Dartmouth with such hope are scattered. Some leave academia entirely. Others pivot to adjacent fields. The ones who stay find themselves working in near-total obscurity, defending their work to sceptical colleagues and hostile funding committees.
But not everyone gives up. In quiet corners of universities, a stubborn minority keeps working. They believe the winter is temporary. They believe the dream is real.
A small group of researchers refuses to quit during the AI Winter. Many of the ideas they quietly develop during this dark period will form the foundation of the AI revolution decades later. The winter is not the end. It is incubation.
But under the ice, something was quietly growing.
The 1980s bring a thaw, and this time it comes from an unexpected direction: Japan. In 1982, the Japanese government announces the Fifth Generation Computer Project, a ten-year, billion-dollar national programme to build intelligent computers. The announcement sends shockwaves through the United States and Britain. Suddenly, AI is a matter of national competitiveness.
The funding floods back in. This time, researchers focus on Expert Systems: programs that encode the knowledge of human experts in specific fields. MYCIN diagnoses blood infections as well as many human experts. XCON saves DEC millions of dollars per year. For the first time, AI is delivering real, measurable business value.
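The anatomy of these systems is easy to caricature in code: a pile of if-then rules plus an inference loop that applies them until nothing new follows. The rules below are invented for illustration and are nothing like MYCIN's real knowledge base, which held hundreds of rules with attached certainty factors.

```python
# Invented if-then rules; real systems like MYCIN had hundreds,
# each with an attached "certainty factor".
RULES = [
    ({"fever", "low_white_cell_count"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection", "gram_negative"}, "suspect_e_coli"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply every rule repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "low_white_cell_count", "gram_negative"}))
```

Every rule has to be interviewed out of a human expert and typed in by hand. That single fact explains both the boom and the bust that follows.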
By the mid-1980s, Expert Systems are a billion-dollar industry. Companies are paying millions for AI programs that can replicate the decision-making of their best human experts. Silicon Valley starts paying attention.
And then it happens again. Expert systems are extraordinarily expensive to build and maintain. They cannot learn. They cannot adapt. When the world changes, they have to be completely rebuilt. In 1987, the market for the specialised Lisp machines that run them collapses almost overnight. The second AI Winter begins.
In 1986, a paper by Rumelhart, Hinton, and Williams revives interest in neural networks, computing systems loosely inspired by the human brain, by showing they can be trained with a technique called backpropagation. The wider world takes no notice. But a few stubborn believers see it as the key to everything.
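Backpropagation sends the network's error backwards through its layers, telling each connection how to change. Here is a minimal sketch of the idea, a tiny two-layer network learning XOR; the sizes, seed, and learning rate are arbitrary choices for illustration, not anything from the 1986 paper.

```python
import numpy as np

# Illustrative sizes, seed, and learning rate.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Nobody writes a rule telling the network what XOR means. It discovers the pattern from examples. That is the whole philosophical break with expert systems.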
And somewhere in the math, something extraordinary was waking up.
May 11, 1997. New York City. The Equitable Center on 7th Avenue.
On one side of the chessboard sits Garry Kasparov, arguably the greatest chess player who has ever lived. The reigning world champion. A man whose intensity has terrified opponents for more than a decade.
On the other side of the board sits no rival grandmaster. An IBM operator relays the moves of a machine housed in a separate room: a refrigerator-sized computer named Deep Blue.
It is the sixth and final game of a rematch; Kasparov had beaten an earlier version of Deep Blue the year before. Now he is rattled. Earlier in the match, Deep Blue made a move so strange, so seemingly illogical, that Kasparov became convinced the machine was being secretly guided by grandmasters. It wasn't. It had simply found a move that no human would have considered.
Game six lasts less than an hour. Kasparov, for the first time in his career, resigns after just 19 moves.
A machine has beaten the best human mind at the game humans believed defined intelligence.
The New York Times. The Times of London. Le Monde. Every newspaper on earth runs the story. "The Brain's Last Stand," Newsweek had branded the match. The question is no longer whether machines can match humans, but what humans are even for.
While the chess match captures the world's imagination, something even more transformative is happening. In 1991, Tim Berners-Lee publishes the first website. By 1999, there are over 3 million. And all of these websites are generating something AI researchers have always desperately needed but never had: data.
Neural network researchers will come to realise that their algorithms were never fundamentally broken; they simply lacked training data and computing power. The internet is about to give them more data than they could have dreamed of.
The pieces were moving into place for something nobody was fully prepared for.
The year 2000 arrives and the dot-com bubble promptly bursts. Billions of dollars evaporate overnight. But in a modest office in Mountain View, California, two Stanford PhD dropouts, Larry Page and Sergey Brin, are quietly building a search engine around an algorithm that ranks web pages using the link structure of the web itself. They call it Google.
Google's PageRank algorithm, which scores a page by the number and importance of the pages linking to it, is one of the first large-scale algorithmic systems that most ordinary people interact with without ever knowing it. Over the following decade, Google will invest billions in machine learning research, becoming the company that quietly builds more AI infrastructure than any other organisation on earth.
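The core idea fits in a few lines. Below is a toy power-iteration version of PageRank on an invented four-page web, a sketch of the principle rather than Google's production system.

```python
import numpy as np

# A hypothetical four-page web: links[i] lists pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = len(links), 0.85

rank = np.full(n, 1 / n)  # start with equal rank everywhere
for _ in range(50):
    new_rank = np.full(n, (1 - damping) / n)
    for page, outgoing in links.items():
        for target in outgoing:
            # Each page shares its rank equally among its out-links.
            new_rank[target] += damping * rank[page] / len(outgoing)
    rank = new_rank

print(rank.round(3))  # higher score = more "important" page
```

A page is important if important pages link to it. The definition is circular, and the iteration above is exactly how the circularity gets resolved.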
January 9, 2007. Steve Jobs takes the stage at Macworld in San Francisco and announces the iPhone. He calls it "a revolutionary product that changes everything." He has no idea how right he is, but for reasons beyond what he can see that day.
By 2010, there are 500 million smartphones in the world. By 2015, that number hits 2 billion. Each one is a data collection device feeding AI systems with richer, more detailed information about human behaviour than anything previously imaginable.
In 2006, a 58-year-old British-Canadian researcher named Geoffrey Hinton publishes a paper that almost nobody outside his field reads at the time. He has been working on neural networks for over twenty years, through both AI winters, through scepticism and ridicule. The paper shows that deep neural networks can be trained efficiently if you start them in the right way. It is the key that unlocks a door that has been jammed shut for decades.
One competition was about to prove to the world that everything had changed.
September 2012. The ImageNet Challenge, a global image-recognition competition, is held. For years, the best error rate has been improving slowly. The best in 2011 is around 26 percent.
Then Geoffrey Hinton's team from the University of Toronto enters. Their system, a deep neural network called AlexNet, achieves an error rate of 15.3 percent. Not a small improvement. Not a modest step forward. A drop of more than ten points in a single year. The AI research community is stunned.
Within two years, every serious AI research lab in the world has abandoned its previous approaches and pivoted to deep learning. Google, Facebook, Microsoft, Baidu: all of them begin restructuring their AI research strategies around neural networks.
March 2016. Seoul, South Korea. Lee Sedol, one of the greatest Go players in the world, sits across from a screen displaying the moves of a program called AlphaGo, built by DeepMind, a small British AI company acquired by Google for a reported $500 million.
AlphaGo wins the first game. Then the second. Then the third. In Game 2, it makes a move, Move 37, that no human would ever have considered. The commentators go silent. One expert starts to cry.
Professional Go players later describe Move 37 as "a move from God." It is so unexpected, so creative, so strategically profound that it forces a complete rethinking of the game that humans have played for 2,500 years. A machine has taught the masters something new.
June 2017. Eight researchers at Google publish a paper titled "Attention Is All You Need." They introduce a new neural network architecture called the Transformer. At the time it seems like another incremental improvement. In reality, it is the moment that makes everything you are reading about today (ChatGPT, Claude, Gemini) possible.
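The paper's central operation, scaled dot-product attention, is remarkably compact. Here is a minimal sketch with illustrative shapes; real Transformers add learned projections, multiple heads, and stacked layers on top of this.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the Transformer's core operation.

    Every query scores every key; softmax turns the scores into
    weights; the output is the weighted average of the values.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

# Illustrative shapes: a "sentence" of 4 tokens, 8-dim embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one new vector per token
```

Every word gets to look at every other word at once, and the whole thing is matrix multiplication, exactly what modern GPUs are built to do at scale.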
The authors don't fully appreciate what they have created. In the years that follow, they leave Google, several to found their own AI companies. Nobody calls it a revolution. But it is one.
All that was needed was someone willing to scale it beyond anything ever attempted.
November 30, 2022. San Francisco.
A company called OpenAI quietly releases a product called ChatGPT. No press conference. No advertising campaign. Just a link posted on the internet. Within five days, one million people have tried it. Within two months, 100 million people have signed up: the fastest-growing consumer application in history.
100 million users in 60 days. For context: Instagram took 2.5 years to reach 100 million. TikTok took 9 months. ChatGPT does it in 60 days. That is not a product launch. That is a cultural event.
At Google, the reaction is described internally as a "code red," the most serious category of emergency. Google has spent fifteen years building the most sophisticated AI research operation in the world. And yet a 500-person startup in San Francisco has just demonstrated something so compelling it threatens Google's core business: search.
Why would you type keywords into a search engine and wade through ten blue links when you can just ask a question and get a direct answer? Google's founders are reportedly called back from retirement to help manage the crisis.
By 2023, the AI race has two main competitors at the frontier: OpenAI, backed by Microsoft with $13 billion invested, and Anthropic, founded by former OpenAI employees who left over concerns about safety. Anthropic's AI system, Claude, is seen as the most thoughtful and careful of the frontier models.
Both companies release increasingly powerful models at a pace that leaves the world breathless. GPT-4. Claude 2. Claude 3. GPT-4o. Each one more capable than the last. By 2025, these systems are embedded in businesses, used by hundreds of millions of people every day, and the subject of urgent debate in governments around the world.
In 2026, AI is not coming. It is here. The question is no longer "will machines be intelligent?" It is "what do we do now that they are?" The story that Alan Turing began with a single question in 1950 has arrived at an answer, and the answer raises ten thousand more questions.
Epilogue: The Story Isn't Over
In 1950, Alan Turing asked if machines could think. He was prosecuted by his government, stripped of his dignity, and driven to his death before he could see even the beginning of the answer.
Through two brutal winters, through failed promises and cancelled funding, through ridicule and despair, a stubborn band of believers kept the dream alive. Geoffrey Hinton worked for forty years in relative obscurity. The ideas he refused to abandon now power every major AI system on earth. In 2024, he was awarded the Nobel Prize in Physics.
In November 2022, a product called ChatGPT launched without fanfare and changed the world in sixty days.
The machines can see, hear, read, write, code, compose, and converse. They can beat the world champion at chess, at Go, at video games. They can diagnose diseases, write poetry, and hold conversations increasingly indistinguishable from a human's. What they cannot do, what no one has yet built, is truly understand. To feel. To care. To want.
The story of artificial intelligence is the story of human ambition, human stubbornness, and the ancient dream of creating something that thinks. It began with one man asking one question in 1950.
Welcome to the most important story ever told.