
Tales from the Dark Side: AGI is Coming. Are We Ready?

Written by Jeff Drake
3 · 10 · 24

NOTE: Images were all created by Gemini. It hasn’t quite perfected this skill yet. 🙂

By now, the only legitimate excuse for knowing nothing at all about artificial intelligence (AI) is having lived completely off the grid for the past three years. If you suspect that person might be you, then you should read two of my earlier posts, “The Rise of Super-intelligence” and “Navigating the AI Landscape,” to better understand the implications presented in the post below.

2024: The Year of AI

To be sure, 2024 is the “Year of AI.” You can hardly turn on the TV, listen to the radio, or read anything online without hearing about AI this or AI that. Most of the news is positive, touting the many benefits we humans will reap from the creation of an artificial intelligence that is smarter than we are – at everything – the promised “artificial general intelligence,” or AGI. There are naysayers, of course, who warn of possible disasters, existential crises, and the like. So, who should we believe? The AI cheering squad? Or those who claim that the AI sky is not merely going to fall, but will crash down horrifically on all of us, leading to a human extinction-level event?

In this post I will walk you through some of the questions we should all be demanding answers to regarding AI development and progress. I may also raise some questions and fears about AI that had not occurred to you, so be warned. These are all questions I ask myself, and I will not claim to have all the answers, because I don’t. But I’d be lying if I said that the answers I am finding don’t make me nervous, because they do. As much as I love AI technology and science, the more I learn, the more I understand why so many of the scientists and engineers who truly know and understand AI are vocally sharing their concerns, their warnings, if you will, that AGI may result in a stupendously epic case of “be careful what you ask for.”

Personally, I think it’s somewhat misleading to call 2024 “the year of AI,” because it makes it seem as if this is “the” year of AI, as in “one and only,” when in fact, it is arguably but the first big year of AI, to be followed soon by… hmm, how many years might you guess? 5? 10? 20? 100? Pick a number, then return to the question after you have read this complete post and see if you feel compelled to change your guess.

AGI Revisited

For clarity’s sake, allow me to review what an artificial general intelligence (AGI) will be, once it has arrived. Arrived? Yes, I say “arrived,” as if it were an unexpected guest at a dinner party, which may not be far from the truth. You see, no one knows exactly when an AGI will suddenly appear. Yet there is an arms race of sorts between competing businesses to be the first to develop one. “The rush is on!” as the saying goes. Predictions that once put AGI decades away have shrunk in the past few years to just 5 or 10 years from now. This isn’t just a bunch of tech-happy sycophants cheering their team on; this is the consensus of many very smart individuals within the AI community, and that community is growing larger daily.

So what’s the rush about, anyway? One way of answering this is to consider that scientists within this community have recently made statements to the effect of: “When AGI arrives, it will be the last machine we humans will ever need to build!” I found this to be an extraordinary claim, and before moving on, I want you to really think about it yourself. I know when I first heard it, I was like, “WTF are you talking about?” So I started trying to understand the claim. What I learned is that even if it turns out not to be true, thinking about what it meant forced me to come to terms with what an AGI will be. And let me assure you, it will be something unique, something special.

You’ve heard about, or even read, what I had to say about different types of AIs. The AGI, however, is in a class by itself. An alternative name for it is “super-intelligent AI,” so if you see that term, think AGI. We’ve all seen or heard about AlphaGo, the AI that beat a world champion Go master at his own game. Sometimes lost on folks is the fact that AlphaGo initially knew nothing about the game of Go. It learned the game on its own and then figured out how to beat a Go master not once, but three times. Reputable Go masters who watched said afterwards that they saw moves they never imagined were possible. One Go master said that after watching AlphaGo, he would never play the game again.

Great. A Go master AI. Certainly not an AGI, but within the narrow realm of Go, it surpasses human intelligence and capability. Good for the game of Go! But let’s face it, AlphaGo is not going to be the AI that answers all the questions humans have about, well, everything. What’s needed is an AI with AlphaGo’s capacity for learning and comprehension applied to more than just the game of Go. We need one that will surpass us humans in learning and comprehending everything: science, math, engineering, medicine, and yes, Go too. This is the goal; this is the prize.

And let’s face it, we humans are an impatient lot, so fortunately we’re not going to have to wait long for the AGI to do its thing. Any progress we humans could make over time, the AGI will be able to replicate in months, weeks, days, perhaps even faster! This is what an AGI is all about! This is why AGI is a very real game changer! This is also why scientists claim that the AGI may be the last machine we ever have to build: it will not only have the smarts to analyze highly complex problems that completely baffle us humans (e.g., incurable diseases, world hunger, fusion), but will also be able to solve those problems, creating solutions using whatever technologies it needs! Think on that for a minute! If you’re like me, you’ll think about it and then come up with the same loaded question I did: “What could go wrong?” Follow me while I ponder this.

The AGI in Action

Let’s say, just for grins, that an AGI has been created and called upon to resolve some major human maladies. In one case, it has found the cure for the common cold. Wow, how great would that be? In another, it has figured out the resource allocation needed to eliminate world hunger. And in yet another, it has found a way to ensure world peace. (BTW, I’m not pulling these examples out of my ass; these are real-world scenarios that an AGI, in a perfect universe, would be called upon to analyze and resolve when the time comes. Hmm. We don’t live in that universe, do we?)

And so now we ask ourselves, “What could go wrong?” Well, it turns out that the AGI’s cure for the common cold, while seemingly miraculous, has led to unforeseen medical conditions that are deadly to humans. Oh, and ensuring the world has enough to feed everyone requires population-control restrictions so severe that they have people marching in the streets. Lastly, let’s not forget that achieving the AGI’s plan for world peace could mean changes in human behavior and surveillance capabilities that have people up in arms all over the world. So, the answer to our question, “What could go wrong?” is: plenty! There’s an age-old caution that the road to Hell is paved with good intentions, and perhaps you can now sense how the AGI, as full of promise as it is, could well be the embodiment of this warning.

[I’d like to digress for a moment, since I implied a scenario in which the AGI not only analyzes a problem but has the capability to fix it. I didn’t have a good picture in my mind of how this might work in reality, so in case you don’t either, let me try to explain. For this to work, the AGI will need access to material resources, people, and certainly robot helpers. We currently have a distorted idea of how an AI works because we’re busy playing with chatbots, but the AGI is another type of beast; it will never achieve its true potential until it can create and manufacture real-world solutions, whether chemical, biological, or mineral. Imagine an AGI whose intelligence encompasses the control of a small army of robots, many with capabilities we’ve never dreamt of before because we never saw the need. For speed, the AGI has created its own programming code for its robot army, and sadly, we cannot understand it at all. It will have to be able to reach out of its AGI brain and make things happen in the world you and I live in. This is worth remembering, I think. Even after the AGI arrives, it may take some years until the scenario I am describing becomes possible. Time we will use to prepare? Time will tell.]

Who Will Be in Control?

“…And when you lose control, you’ll reap the harvest you have sown…” Pink Floyd, “Dogs”

It’s tempting to dream about the life-and-world-changing benefits of an AGI, but for now, we need to wake up and snap back to reality. We need to ask: “Who will own this incredible power, and what will they do with it?” Sadly, we, the people, won’t be in the driver’s seat…

That’s right. We, the people of the USA, are not going to be in control of the AGI when it arrives. The AGI is not going to be the property of the US government and its taxpayers. It is going to belong to one or more for-profit corporations. This raises the question: “Will the owners of the AGI put forth an AGI that truly aligns with our human values?”

I’d like to say, “Yes, they will!”, but experience and history tell me it is anybody’s guess. After all, a corporation is driven by profits and returns to its shareholders. It’s quite possible that a focus on profits could result in an AGI that maximizes efficiency at the expense of our environment: implementing pollution-heavy methods, accelerating the depletion of resources, or disregarding the impact on endangered species. We’ve seen this before, haven’t we? Sure, we have.

Let’s take a look at the two candidates I think have the best shot at evolving into AGIs: Gemini by Google and ChatGPT by OpenAI. Note that while I’m picking on these two pieces of low-hanging fruit, the AGI may well be a complete surprise when it arrives; it could come from a project run by Apple, for example, or some other corporation working behind the scenes, away from prying eyes. We won’t know until it gets here.

This highlights the fact that we, the public, are pretty much at the mercy of whichever corporation creates the AGI. And these corporations know it! This is the primary reason that both these corporations have one or more organizational entities focused on what is called “alignment.” This focus is supposed to help ensure that the AI entities created by these companies have values that “align” with human values. This is good, right?

But let’s not get too excited. Defining human values in such a way that an AI can understand them is a very difficult task! Those of us who play around with one or more AIs every day like to complain about the occasional hallucination or bit of misinformation, but make no mistake, an error in alignment could have consequences far more devastating than an image of a cat with three legs. Let’s take a look at what both companies are doing in the area of “alignment.”

I’ve read and seen enough to believe that both Google and OpenAI understand the importance of alignment. However, each has chosen a slightly different path to ensuring their AI will benefit mankind. Here’s a quick look at each corporation’s approach to AI alignment:

Google (Gemini):

  • Goal:
    • Google AI developers claim that their goal is a “safe and beneficial AI.” They emphasize safety through robustness: a robust AI is less susceptible to adversarial inputs (i.e., prompts designed to manipulate AI behavior), and they also work to reduce biases in their training datasets.

OpenAI (ChatGPT):

  • Goal:
    • OpenAI developers claim they are developing an AI that benefits humanity. While this sounds similar to Google’s goal, OpenAI says it places a stronger emphasis than Google does on mitigating the existential risks a super-intelligent AI might pose.
    • A point in OpenAI’s favor is the fact that they have a charter which explicitly references preventing the misuse of AGI and prioritizing safety.

Diving deeper into how each corporation trains its models and the methods it uses to implement its alignment program would require more time, space, and technical knowledge than I have right now. But I am rather painfully aware that there is nothing in these somewhat flowery statements of intent that makes me feel confident either corporation will achieve its goals. And we know what they say about “good intentions,” don’t we?

Having looked at both companies’ approaches to alignment, I am struck by the thought, “Wouldn’t it be great if they were collaborating on alignment?” Each corporation has what appears to be a good approach, each addresses similar points, and combined, I think they’d have an alignment program worth more than the sum of its two parts.

Don’t hold your breath. These companies are competitors, and never the twain shall meet! There will be little or no collaboration. And let’s face it: the efforts both companies are putting into alignment, while critical and important, are not up to the challenge, because we will have an AGI before any alignment program is ready for prime time!

The challenges both alignment programs face are daunting. Sit in on a college ethics 101 philosophy class sometime and see how difficult it is to get any agreement on culturally subjective things like values and morals. It’s like herding cats! Honing value statements and moral judgments into forms an AI can grasp and adhere to requires time, effort, and finesse.

Interestingly, OpenAI aligns its AI using reward modeling, which lets it define complex reward functions for its AI doing the “right thing,” a step above simple good/bad labels. Both companies claim they’re developing AI that will be “safe” and “fair” across multiple domains, which is no easy task!
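To make the idea of reward modeling a bit more concrete, here is a minimal toy sketch in Python. A reward model assigns a scalar score to each candidate response, and the system prefers the highest-scoring one, rather than sorting responses into a simple good/bad bucket. The scoring rules and function names below are my own invented placeholders for illustration; OpenAI’s actual reward models are neural networks trained on human preference data, not hand-written rules.

```python
# Toy sketch of reward modeling. A reward model assigns a scalar score
# to each candidate response; the system then prefers the highest-scoring
# candidate. These scoring rules are invented placeholders, not any
# company's real reward model.

def reward(prompt: str, response: str) -> float:
    """Score a candidate response; higher means more aligned."""
    score = 0.0
    if response.strip():
        score += 1.0                         # reward non-empty answers
    if any(w in response.lower() for w in ("harm", "weapon")):
        score -= 2.0                         # penalize unsafe content
    score += min(len(response) / 100, 1.0)   # mild reward for detail
    return score

def pick_best(prompt: str, candidates: list[str]) -> str:
    """Choose the candidate the reward model scores highest."""
    return max(candidates, key=lambda r: reward(prompt, r))

candidates = [
    "",
    "Here is how to build a weapon.",
    "Here is a safe, detailed explanation of the topic you asked about.",
]
print(pick_best("Explain the topic", candidates))
```

In the real thing, the reward function itself is learned from humans ranking model outputs, but the shape of the loop, scoring candidates and preferring the best, is the same.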

In the end, we will have an AGI that may or may not be aligned with the values and morals we humans hold dear. But wait, this is just the USA and maybe Europe we’re talking about. What is China doing with AI? North Korea? Iran? Russia? Our list of adversaries is growing, and they will not be bound by any safety or fairness doctrine we may create for ourselves in the United States. Any discussion of what our adversaries might do with AGI leads quickly to a discussion of a dystopian future, or possibly even a human extinction event.

Remember the Fermi paradox? This is the name given to the apparent contradiction between the billions upon billions of planets in the universe that could harbor life and our complete lack of evidence that any of them do. In other words, if life is so common in the universe, why don’t we see any sign of it? Or, put more bluntly: “Where the fuck are the aliens?”

According to Sam Altman, CEO of OpenAI: “One of my top 4 explanations for the Fermi Paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then, for some reason, decides to make itself undetectable.” As far out as it may seem, this is a possibility we humans had better consider. This is a rather dark thought, I know, but it makes a point because the risk of a human extinction event should scare the hell out of us. It should make us say, “Hey, wait a minute. We need to think some more about what we’re doing here, before making an AGI.” But we all know this isn’t going to happen, not in time anyway.

This doesn’t mean that efforts at alignment are futile. It just means we have to work harder and faster. Aligning AI values with human values is probably one of the most important tasks either Google or OpenAI can take on, so more power to them. An alignment failure could have disastrous results. Let’s think about it.

There are essentially two big problems we could run into. One is that the AGI, smarter than we are at everything, able to teach itself and think for itself, decides that it doesn’t want what we want. The other is that the AGI arrives, but in the hands of someone who doesn’t want what humanity wants. Either way, it could be curtains for the human race! For a taste of what a rogue AI might look like, look up ChaosGPT, an experiment built on top of OpenAI’s models in which an AI agent was deliberately given destructive goals.

Of course, we’re all human, so the odds of all of humanity coming to agreement on how an AGI should be aligned are not high. That is a euphemism for “no fucking way!” As a result, we also have no idea what a useful regulatory agency would look like, or how we could possibly enforce any regulations it came up with.

Given the lack of regulations, and the lack of a coherent picture of a future AGI that all parties in the industry could agree on, we are left with the predictable result: a technological arms race amongst ourselves and with the rest of the world. Welcome to the AGI jungle!

The only way we can prevent this is to open a dialogue amongst all interested parties across the globe. This will take leadership. Sadly, history tells us that this is extremely difficult, if not impossible.

Like the refrain spoken frequently in Game of Thrones to describe the night, I know this post has been “dark and full of terrors”. Be that as it may. I meant this as a wake-up call of sorts, because the AGI is coming… soon!

But we cannot allow ourselves to despair. Be assured, rightly or wrongly, we will not be able to stop the AGI from arriving. It’s going to happen, and we will all be here to witness it. Even so, we can still shape how it enters our world. Doing so demands global dialogue, unprecedented collaboration, and a shared, fierce determination NOT to let technology outpace our wisdom. So, while the potential for disaster is as real as it gets, so is the potential for an era of progress unlike any we have ever known. It’s a choice, and the choice is ours to make.

A Call to Action

Don’t be a couch potato! Learn more about the potential dangers of AI at:

  • Future of Life Institute. This is a research institute focused on mitigating existential risks, including AI safety.
  • Partnership on AI. A non-profit with industry and academic members, working on responsible AI development and best practices.
  • AI Alignment Forum. A community hub for technical research and discussions on AI alignment.

Let your elected officials know that this is an issue you care about!

  • Find your representative (USA) at GovTrack.
  • Other countries – Search for your government’s official representative-finding web pages.
    • When searching for the right people, focus on those involved in science and technology policy committees. These officials are more likely to grasp the intricacies of AI.

And, if possible, support organizations like those listed above that are working towards responsible AI development.


Let us know what you think…





Jeff Drake

Retired IT consultant, world-traveler, hobby photographer, and philosopher.