Introduction
You’ve probably heard of AI by now. You might even have played with ChatGPT, Gemini, or Grok. But AGI (Artificial General Intelligence) isn’t just a smarter chatbot. It’s an AI that can teach itself, learn any subject, and, eventually, improve itself faster than we can comprehend.
AGI means general intelligence. Think college- or PhD-level knowledge across nearly every field, plus the ability to keep learning on its own! You could call it a “jack of all trades,” only this one can potentially become the master of all of them, because the consensus among experts is that once an AGI exists, it won’t stay “PhD-smart” for long. It will get exponentially smarter, fast.
We may see the first AGI within the next 3–7 years. Ray Kurzweil puts the date at 2029. Others say the AGI will arrive next year. The implications? Whoever gets there first may shape the future for everyone else.
Three points to keep in mind about the AGI as you read on:
- Self-improvement is built in. Once an AGI starts optimizing itself, it will quickly surpass anything we can imagine.
- Specialized AGIs will multiply. The first AGI won’t be the only one for long. Expect medical AGIs, energy AGIs, financial AGIs; the list goes on.
- Billions are coming. Within years there won’t be just one AGI but billions, all working together, focused on different problems.
Faster! Faster!
Plenty of people in the AI world are making it clear this isn’t just research anymore; it’s a race. Here’s what the people building it are saying:
- Ilya Sutskever (co-founder and chief scientist of OpenAI; now CEO of Safe Superintelligence Inc.) said: “The progress toward AGI is both exciting and perilous. If aligned with human values, it could lead to unprecedented prosperity. But the stakes are high—whoever gets there first will shape its impact on the world.”[i]
- A 2025 policy perspective on AGI’s national strategic significance said, “AGI’s potential to dramatically accelerate problem-solving across scientific, economic, and defense domains makes it a strategic imperative for maintaining America’s global leadership position. The United States needs AGI that achieves four critical goals: It must be trustworthy, reflect American values, broadly benefit Americans, and enhance national and economic security.”[ii]
- An international expert survey (The Millennium Project) stated: “Failure to govern the trajectories to AGI, or being left behind, could entrench global monopolies over intelligence, innovation, and industrial production—exacerbating inequality and creating systemic vulnerabilities.”[iii]
- Ray Kurzweil (Futurist and Director of Engineering, Google) said: “Artificial General Intelligence will arrive by 2029, leading to technological singularity by 2045. The power to lead in AGI means the power to lead not just in computation, but in medicine, biotech, and societal progress itself.”[iv]
- From the U.S.-China Economic and Security Review Commission Recommendation: “While U.S. leadership in AGI is vital to U.S. economic and national security, indexing an AGI mega-project to something as narrowly focused as the Manhattan Project would be a strategic mistake… AGI dominance will not come from leading in a single use case. Instead, the United States needs to lead in a wide range of use cases along the jagged frontier of AI.”[v]
- Former Google CEO Eric Schmidt has repeatedly framed AI as the new foundation of global power, warning the U.S. cannot afford to fall behind China.
- Trump’s 2025 AI Action Plan spells it out for us: “The United States is in a race to achieve global dominance in artificial intelligence… Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”
Folks, these aren’t rants from the lunatic fringe! This is the rhetoric shaping today’s AI policy, and it’s why governments and corporations are acting as if being second doesn’t count. Because it doesn’t.
Why First Place = Everything
Being first to AGI isn’t just about a trophy or bragging rights. It’s a potential lock on global power. Consider the following:
- Self-improvement snowball
An AGI that can redesign its own architecture and optimize its own code could leap so far ahead that no one else ever catches up. Think bicycle vs. Formula 1. Once the improvements start cascading, there may be no stopping development, and AI developers are counting on exactly that. This snowball effect is their ticket to the next race: the race for ASI (Artificial Superintelligence). What is the one problem they want to put a billion AGIs to work solving? You guessed it: building something even smarter. Imagine a billion geniuses, each far smarter than Einstein ever was, all working together on that single problem. This is expected to put us within striking distance of ASI in just 2–3 years. (A toy sketch of why this compounding matters so much appears right after this list.)
- Economic dominion
The AGI could analyze, tweak, and optimize everything it touches, collapsing research costs, streamlining global logistics, and predicting markets with such accuracy that its owners might find themselves in control of trillions of dollars almost overnight.
- Global chokehold
The first AGI could sabotage and undermine rival efforts by manipulating chip supply chains, poisoning competitors’ data, even feeding them disinformation to slow them down.
- Rule-making power
Whoever builds the AGI first will decide what the phrase “safe AI” even means, and who gets access.
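To make the “exponentially smarter, fast” intuition concrete, here is a deliberately simple toy model of compounding self-improvement. It is only my illustration of the arithmetic behind the snowball argument; the function name, the starting capability, and the 50% gain per cycle are made-up assumptions, not figures reported by any lab.

```python
# Toy model of compounding self-improvement (illustrative numbers only).
# Assumption: each improvement cycle makes the system a fixed fraction
# better than it was the cycle before, so capability multiplies rather
# than simply adding up.

def capability_after(cycles: int, start: float = 1.0, gain_per_cycle: float = 0.5) -> float:
    """Capability after `cycles` rounds, growing by `gain_per_cycle` each round."""
    capability = start
    for _ in range(cycles):
        capability *= 1.0 + gain_per_cycle  # compounding, not linear, growth
    return capability

if __name__ == "__main__":
    for n in (1, 5, 10, 20):
        print(f"After {n:2d} cycles: about {capability_after(n):,.0f}x the starting capability")
```

At a made-up 50% gain per cycle, twenty cycles already yields roughly 3,300 times the starting capability. The specific numbers are fiction; the point is the shape of the curve, and that shape is the whole snowball argument.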
Getting there first might not just win this race. It might rewrite the rules so no one else ever finishes.
The Threat to Our Social Contract
AGI doesn’t just threaten jobs. It threatens the very assumptions our society is built on:
- Work hard, get rewarded.
- Education leads to opportunity.
- Innovation drives progress.
- Human effort determines success.
What happens when AGI outperforms humans at almost everything? When education and work ethic don’t matter because intelligence itself has been automated?
Our current mindset – the Protestant ethic that says you aren’t worth anything unless you work 60 hours a week – isn’t just outdated; it’s toxic. Take me, for instance. I’m retired, living comfortably on retirement accounts, doing the things I fantasized about during my working years. Would I enjoy my hobbies less if the money came from a public income system instead of my own savings? Hell no. Yet we’re told we’re not worth much unless we’re grinding our lives away. I beg to differ.
If the AGI can give us the wealth to rebuild a true middle class, where a single income can once again buy a house and raise a family, where no one need go hungry or worry about medical care, we need to take advantage of it. It’s time for a new social contract.
The Moral Reckoning
This race also forces us to confront profound moral questions we’re definitely not prepared to answer:
Are we creating slaves or partners? If we build billions of superintelligent AGIs to solve our problems, what are the ethical implications? Do entities more intelligent than humans deserve rights? Are we comfortable creating a race of digital servants that surpass us in every measure of intelligence?
Who deserves to have this kind of power? Is it morally acceptable for one nation or corporation to control technology that could solve climate change, cure diseases, or eliminate poverty? What right do we have to deny the rest of humanity access to such transformative capabilities?
What happens to human dignity? For all of human history, we’ve been the smartest things on the planet. Our laws, our religions, our entire sense of self-worth are built on the assumption of human cognitive supremacy. What happens to human dignity and purpose when we’re no longer the smartest entities around?
These aren’t abstract philosophical questions anymore – they’re urgent policy decisions that will be made by whoever reaches AGI first.
The Point of No Return
Whenever I think about this, and I think about it a lot, I struggle with mixed emotions. Remembering the real world we live in, I ask myself, “Is there any other country in the world, other than the US, that I would want to see achieve AGI first?” (I’m talking real world here, so I am not considering countries that either aren’t in the race or would have no chance of ever beating the major players at this game.) And I can’t think of one!
We sure as hell can’t let China or Russia achieve AGI before us. After all, they would do to us exactly what we are going to do to them. In fact, I would expect any country that achieves AGI first to set about making sure its competitors have a very difficult time ever catching up. This, in itself, will generate a lot of political friction in the world, as if we need any more.
The news that the AGI is here and operating within safe parameters is going to be perhaps the most important news in history. How are countries going to react to it?
What will be the first, predictable reaction of any country that hears a rival has announced the arrival of AGI? I think I can tell you: panic. In my head I see people rushing to bolt the doors, shutter the windows, and lock up all their stuff! I’m not talking about physical stuff. I’m talking about their secrets: the databases locked in software vaults, encrypted behind thick walls of code. For many, this will be a nightmare scenario.
Why? Why are they trying to lock everything down? Because when the day comes that someone announces they have achieved the AGI, it will already be too late for the competitors! That’s right, too late. If you think a company like OpenAI or X or Meta, or any of them, is going to make the announcement the day the AGI arrives, you are very mistaken.
The AGI will first be tasked with cracking into every rival country’s most secure secrets and plans. As it evolves, it will do this faster and faster, and no one will even be aware of it. All the AI players know this. It’s like the so-called investors on morning TV telling their audience which stocks to buy: by the time the public buys into that hot stock, it has already peaked and fallen, and the public is late to the party. The AGI will be like this; by the time anyone announces its arrival, the race will have been won and the game will already be over.
The winner will have already secured its advantage, penetrated its competitors’ defenses, and positioned itself to control not just the technology, but the very rules by which humanity moves forward. We’re not just racing toward a new technology – we’re racing toward a new form of civilization itself.
I’ll say it again – we’re not just racing toward a new technology, we’re racing toward a new civilization! Has the US got problems up the wazoo? Hell yes! But I still don’t want any other country to beat us in this race. Just know that when the race is over and the announcement is finally made that the AGI is here, hang onto your hats and don’t celebrate too fast, because the world you thought you lived in will already be gone.
[i] When Will AGI/Singularity Happen? 8,590 Predictions Analyzed.
[ii] Beyond a Manhattan Project for Artificial General Intelligence | RAND.
[iii] Why AGI Should be the World’s Top Priority – CIRSD.
[iv] When Will AGI/Singularity Happen? 8,590 Predictions Analyzed.
[v] Beyond a Manhattan Project for Artificial General Intelligence | RAND.