
The Rise of Superintelligence: Our Future with AGI and ASI

Written by Jeff Drake
8 · 08 · 23


Table of contents

  • Introduction
  • What is an AGI?
  • What is an ASI?
  • The Differences between AGI and ASI
  • The Implications of AGI
  • The Rise of Superintelligence
  • Conclusion

Introduction

I try to keep up with all the AI news we are bombarded with every day, I really do, but it’s hard. There is just so much! You know it’s a very busy field when YouTubers are producing several videos each week covering mind-blowing AI news in 24-48 hour chunks. That’s how fast this technology is advancing.

At the forefront of what is now being called the “new industrial revolution” are two acronyms I have talked about before, and will continue to talk about until they happen: AGI (artificial general intelligence) and ASI (artificial super-intelligence). One reason I’m doing this is that these are two terms the public is going to hear more and more about, so it’s a good idea to know what they are, which can help us separate fact from fiction. Why care about this at all? Because AGI and ASI are going to affect almost every aspect of our lives, whether you want them to or not!

What is an AGI?

According to various experts, artificial general intelligence (AGI) is a hypothetical type of artificial intelligence that would have the ability to perform any intellectual task that a human being can. This includes tasks that require common sense, reasoning, and creativity. Wikipedia says it this way, “An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.” (Note that “economically valuable tasks” simply refers to tasks that provide value to us humans.)

Going forward, I will be using AGI in that “alternative” sense, rather than to mean some kind of AI that could merely “…learn to accomplish any intellectual task…” that human beings can perform. In other words, I’m going with the definition of AGI as an autonomous system that “surpasses human capabilities.” I think people who believe an AGI will simply be an AI that is as good as humans at cognition should read a book from long ago titled, “Your God is Too Small,” because they really need to think bigger. Merely matching our intelligence will be only a brief moment in the AGI’s existence as it speeds past us, past our capabilities, past our intelligence. No, an AGI is going to be much smarter than we could ever be. And I don’t say that lightly.

If you dive into the current material available on AGI, you’ll find most experts believe that this more advanced type of AGI will be the result. It makes sense, doesn’t it? The AGI will have access to a wealth of knowledge and data, more than any single human, and will be capable of analyzing this data with incredible computer processing power, far faster than any human brain. Already, corporations are working on computer systems that process trillions of bits of data in seconds. But an AGI will not be an AGI just because it can process data faster than any supercomputer. It will be an AGI because it will have the ability not only to learn, but to learn independently, learn from its mistakes, apply that knowledge to different tasks, and improve itself over time. This means the AGI is going to be smarter and better than humans at just about everything.

Think about this for a minute. How do you feel about this? I will admit to a mix of emotions on this topic. On the one hand, I’m excited, because I love the field, the science, the technology, and I see the promise of AGI; but on the other, it concerns me because the field appears to be advancing quickly past our capacity to plan and prepare for an AGI. Already, I fear we are too late. Some experts are now saying we will have an AGI within the next year! No way are we prepared for it.

AGI is not going to be your run-of-the-mill AI that’s built for a specific task or skill. And don’t think we’re going to achieve just one AGI. Who could be satisfied with just one? Oh no, there will be multiple AGIs created. Remember, an AGI possesses a general understanding of the world that is better than ours. It has the potential to think creatively, solve complex problems, and even generate new ideas. What kind of ideas will an AGI come up with? Good question. Let’s take a look.

Imagine a world five years from now, with AGIs that help us design unimaginably innovative solutions to complex problems; AGIs that find cures for diseases that have plagued humanity for centuries, and AGIs that make our lives easier in ways we can’t even yet fathom. The AGI is the kind of “disruptive” technology you hear about these days, involving technological advances that will revolutionize entire industries (that’s why they call AI the next industrial revolution), unlock scientific breakthroughs, and pave the way for a utopian future we’ve often dreamed about (cue the rainbows and unicorns). So much promise! But is the promise real?

It’s important to remember that AGI also comes with its fair share of challenges and concerns. As AGI becomes more intelligent and autonomous, questions about ethics, control, and the potential risks it poses start to emerge. We need to ensure that AGI is developed responsibly and in a way that aligns with human values. After all, we don’t want a real-life Terminator situation on our hands. (By the way, “alignment” has recently become a popular topic within the AI community as it struggles to figure out how to handle very smart AIs. I will write more on alignment efforts soon.)

To allay these fears we need to put up some guardrails for AGI, some rules, some limits, and yeah, some ethics! This, of course, raises the questions: Whose rules? Whose limits? Whose ethics? Let’s face it, put this technology into the wrong hands and the AGI game could go sideways very fast. Controlling any AI is not an easy thing, especially when the developers of these AIs keep telling us their AIs are continually “surprising” them by doing things they never thought they could do. We’re watching AI “emergence” in real time. This will continue to be the case every day with an AGI, once it arrives.

So, there you have it – AGI, the future of superintelligence. A future that may be only a year away, or whenever ChatGPT 5 is released, according to some. I’m not going to say that AGI has the potential to change the world as we know it, for better or worse; I’m saying that it will change the world as we know it, for better or worse. It’s a given. Whether we end up in a utopia or a dystopia, only time will tell. But one thing’s for sure: the rise of AGI is a fascinating journey that many of us are eagerly watching unfold. So, don’t sit idly by; buckle up and get ready!

Let’s move on and discuss ASI (artificial super-intelligence).

What is an ASI?

I think I should begin by telling you that an ASI is not the same as the Singularity, a subject I discussed in an earlier post. An ASI is yet another hypothetical type of artificial intelligence, one that would be vastly more intelligent than any human. It would have the ability to learn and process information much faster than any human, and it would be able to solve problems that are beyond the ability of humans. Hmm. Sounds a lot like the AGI, doesn’t it? Well, it is like an AGI, but it’s an AGI on steroids. The AGI will meet and eventually exceed human capabilities, but the ASI will be able to outperform humans in every field imaginable from the moment it is born. Humans, us, you and me, will be like ants to the ASI. So, in simpler terms, you can think of the AGI as similar to an overachieving human, while the ASI is more like an awe-inspiring superhuman.

The Singularity, however, is not a thing like an AI; it is a moment in time, an event, one that will shake the world we live in. It is the moment when an AI becomes capable of improving itself exponentially. I think the evolution of AI will go like this: AI leads to AGI, which leads to ASI, which leads (potentially) to the Singularity. It seems that creating an AGI is a necessary step to creating an ASI, and the ASI is a necessary step to creating the Singularity. However, developing an ASI may not be the only way to generate a Singularity. For the ASI evolutionary step to be skipped, something other than an ASI would have to produce the same type of exponential change. Do you have an idea of what that might be? If you said quantum computers, get yourself a cigar! I’m not going to discuss the quantum computing variable here and now, but as you might guess, when you throw quantum computers into the AI mix, things will quickly get complicated beyond our ability to comprehend.
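To make the word “exponential” a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not a real model of intelligence; the starting capability, improvement rate, and number of generations are made-up numbers chosen only to show how compounding self-improvement produces the “hockey stick” curve people have in mind when they talk about a Singularity.

```python
# A toy sketch of recursive self-improvement (illustration only).
# Assumption: each generation, the AI uses its current capability to
# improve itself by a fixed fraction of that capability, so growth compounds.

def self_improvement_curve(start=1.0, rate=0.5, generations=20):
    """Return the capability level after each generation."""
    capability = start
    history = [capability]
    for _ in range(generations):
        # The improvement step is proportional to the current capability,
        # which is what makes the curve exponential: start * (1 + rate) ** n.
        capability += capability * rate
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(self_improvement_curve()):
        print(f"generation {gen:2d}: capability {cap:12.1f}")
```

With these made-up numbers, capability grows by a factor of roughly 3,300 in twenty generations, whereas a fixed, non-compounding improvement of the same size each generation would only reach about eleven times the starting point. That gap between steady improvement and compounding improvement is the whole intuition behind the Singularity.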

What can these super-intelligent machines do, you ask? Well, buckle up, because the possibilities are mind-blowing.

ASIs will process information at lightning-fast speeds, but they will be much more than mere calculators on meth. They will analyze vast amounts of data, identify patterns beyond human comprehension, and make scarily accurate predictions with ease. Think of an ASI as the Einstein of the machine world. With their superhuman intelligence, they will be able to solve complex problems, invent new technologies, and even make breakthroughs in science and medicine. Basically, they will outsmart us in every possible way.

Sounds amazing, right? That’s because it is. But take a breath, because if we are worried about controlling an AGI, how the hell are we going to control an ASI? Thoughts suddenly spring to mind, like, “Immense power doesn’t come without its own set of concerns,” and “With great power comes great responsibility.” You can insert your own cliché here. An ASI is going to blow our minds!

Is an ASI inevitable? Well, if it isn’t perfectly clear already, the ASI is the ultimate goal of AI – the pinnacle of intelligence where machines surpass human capabilities and open doors to a future we can only dream of. So yeah, the industry is going to make ASI happen eventually. To reach the ASI, the AGI is but a necessary stepping stone.

The Differences between AGI and ASI

Now, let’s talk about the scope of abilities. An AGI, with its human-like intelligence, can excel in a wide range of tasks. It can perform jobs that require cognitive abilities like understanding natural language, recognizing objects, and even driving cars. On the flip side, an ASI is not limited to a specific set of tasks. It can excel at pretty much anything you throw at it. Need a brilliant mathematician? Done. Want a master artist? Piece of cake. The ASI can do it all, and then some. It’s like having a personal superhero (minus the cape and spandex). To sum it up, the AGI is like the brainiac cousin who can do all the intelligent stuff we can, while the ASI is the absolute genius who can do everything better, faster, and with a touch of superhuman awesomeness.

On a related topic: these days a lot of what I read on AI makes it sound as if AGI and ASI will be single objects, as if there will just be “the one” AGI or ASI. Believe me, the work going on in AI is going to produce multiple AGIs and ASIs, just like we have multiple AIs today. It’s not a one-and-done operation. The reasons for this are quite clear. First, both AGIs and ASIs are going to be highly complex systems, and they will be very expensive. It will probably take collaborating teams of scientists and governments to make them. A possible downside of this is that there will no doubt be just a few major developers in control of the technology. Second, AGIs and ASIs will be highly versatile systems, used for a wide variety of purposes. We could well end up in a situation with multiple AGIs and ASIs, each with its own goals and objectives. What happens when two AGIs have competing objectives? Or objectives directly opposed to each other? I dunno. I suspect it could be interesting. Ya think?

But, hey, before we get too excited, let’s not forget about the ethical considerations and concerns that come with the rise of these superintelligences.

The Implications of AGI

Ah, the implications of AGI. Brace yourselves, folks, because we’re about to dive into the potential rollercoaster ride of positive and negative impacts this advanced form of intelligence can have on our lives. And trust me, it’s not all rainbows and unicorns.

Let’s start with the positive side of things. Picture a world where AGI enhances our decision-making processes, helps solve complex problems with ease, and boosts scientific discoveries at an unprecedented rate. Sounds amazing, doesn’t it? We could see breakthroughs in medicine, climate change mitigation, and even the eradication of world hunger. Plus, having an AGI could revolutionize industries, create new job opportunities and free up our precious time for more important things, like binge-watching our favorite TV shows guilt-free. Well, maybe not guilt-free, but you know what I mean.

But every coin has two sides. And when it comes to AGI, the negative impacts are as real as your local news at 6. One major concern is its potential to outsmart us mere humans. Say what? Yeah, outsmart. As in: Imagine a superintelligent machine deciding that it no longer needs us in the equation! Ha, someone please cue the Terminator theme song! Not only that, but an AGI could also exacerbate existing social inequalities, creating a bigger gap between the haves and the have-nots. (I hope to write more about the impact of AGIs on our US economy in the near future. My thoughts on this are still in early development, but I’m toying with thoughts of reinstituting an ancient Greek and/or Roman ideal, considered one of the primary building blocks that allowed them to live in so-called Golden Ages – the slave state. WTF? Did I get your attention? LOL. Mission accomplished!) And let’s not forget about privacy concerns. With such powerful intelligence, personal data could easily be misused or exploited.

So there you have it – the double-edged sword of AGI. While it holds incredible promise for improving our world, we must tread carefully and address the ethical ramifications. It’s like having a genie that can grant you any wish, but you better be careful what you wish for because there’s no telling if it’ll be our salvation or our downfall. We must learn from our past mistakes, collaborate to set standards and regulations, and guide our future AGIs towards a future that benefits us all.

But wait. There’s more! We have navigated the rather treacherous sea of AGI implications only to find ourselves standing on the shore leading to the rise of superintelligence. What lies ahead?

The Rise of Superintelligence

Ah, the rise of superintelligence! It’s like watching an episode of Black Mirror, but without a vape and snacks. So, let’s dive right into the current progress in AGI and ASI research.

Will ASIs surpass our intelligence and eventually control us like puppets? Will they wipe out humanity and take over the world? Or will they just sit back and binge-watch Netflix like the rest of us? The truth is, we do not know. All I do know is that when someone builds either an AGI or an ASI, there has to be an on-off switch somewhere that a human can get at! Of course, either an AGI or an ASI could figure out a way around this, I am sure. Will we know if they do that, or will they hide that fact from us? What would you do if you were them? This is how we have to think when we consider AGIs and ASIs.

These questions may seem far-fetched, but believe it or not, they have been the subject of many debates and discussions within the scientific communities. Researchers are actively exploring the possibilities and limitations of these superintelligent machines in order to find ways to harness their power for the greater good. After all, who wouldn’t want a machine that could solve world hunger, eradicate diseases, and assemble IKEA furniture in minutes?

Conclusion

In conclusion, the rise of superintelligence is a fascinating and slightly terrifying journey and it is a journey we are currently experiencing! While both AGI and ASI hold tremendous potential, we must proceed with caution. The progress in AGI and ASI research has opened doors to a realm of possibilities, but we must address the concerns and ethical considerations that come along with it.

And with that, I’ll bid farewell to this journey into the world of AGI and ASI. I hope you enjoyed the trip so far. Until our next brain-busting exploration, stay curious, get informed!


