
The Technology Singularity: What is it?

Written by Jeff Drake
7 · 04 · 23


Introduction

Are you a science buff? Have you watched programs on PBS or other networks that discuss black holes? If you answered yes to these two questions, then you have probably already heard of a scientific term that certainly captures the imagination: “singularity.” If, on the other hand, you’ve never heard of a singularity, consider this blog post a basic introduction.

First, to help eliminate confusion: these days there are two things to which the term “singularity” applies. One has to do with astronomy and black holes, the other with computers and artificial intelligence.

Within the field of astronomy, a black hole is an object that can be created when a star reaches the end of its life. Assuming a star checks all the right boxes for becoming a black hole (e.g., roughly 20 times the mass of our Sun, all of its fuel exhausted, among others), the star collapses rapidly. Its outer layers blow off in a supernova explosion while the core keeps collapsing, getting smaller and smaller, denser and denser, until, in theory, it reaches a point of infinite density.

This phenomenon, in which the collapsed core can shrink no further and a black hole is formed, is what is called a “singularity.” Surrounding the singularity is a border region of sorts called the event horizon. The event horizon marks the distance within which the gravitational pull is so strong that not even light can escape, and where, to an outside observer, time itself appears to stop. While the black hole sucks in everything it can reach, nothing will ever leave it, because nothing can get back past the event horizon. Although mathematician Karl Schwarzschild worked out the mathematics describing such an object in 1916, it wasn’t until 1967 that physicist John Wheeler popularized the term “black hole.”
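For readers who like to see the numbers behind “not even light can escape”: the size of the event horizon comes from a well-known formula, the Schwarzschild radius of a mass $M$:

$$ r_s = \frac{2GM}{c^2} $$

where $G$ is the gravitational constant and $c$ is the speed of light. Squeeze any mass inside its Schwarzschild radius and light can no longer climb out. For our Sun, $r_s$ works out to about 3 kilometers; for a star of 20 solar masses, about 60 kilometers.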

Scientists don’t know a lot about the black hole singularity, because we’ve never seen one, although the math says they exist, and we have indeed found black holes and even taken a picture of one. The reason we haven’t seen the singularity at the heart of a black hole is that we can’t. Nope. Nothing escapes a black hole. Needless to say, studying a singularity directly is far beyond our current technical capabilities. So, in this sense, an astronomical singularity is a hypothetical object: we can’t observe it directly and can only infer what is happening from the black hole’s effect on the surrounding space-time.

So, for now, put aside further thoughts about black hole singularities. I’m going to switch gears and focus on the purpose of this blog post, which is to introduce you to a different type of singularity: the “technology singularity.”

Technology Singularity

The other kind of singularity, the one we are hearing a lot about recently, is the technology singularity. It, too, is an event that may happen at some point in our future. This event has nothing to do with astronomy or star lifecycles; it has to do with computers, specifically computer systems designed for artificial intelligence.

So, what is the technological singularity? Put simply, this is a hypothetical event in which artificial intelligence (AI) becomes so advanced that it surpasses our human intelligence and capabilities.

[I say “hypothetical,” but be aware that a growing number of scientists believe the technological singularity is not hypothetical at all and really is going to happen, though the timeframes differ from scientist to scientist. Still others feel the technological singularity will never happen. It’s good to know, a relief almost, that there are different sides to this story.]

It may be tempting to apply the term “singularity” to any really, really fast computer. Try not to, because you’d be wrong. While there is no precise definition of the technology singularity, there is more to it than being smart or doing calculations very quickly.

Here are some features that most scientists would say are necessary before a computer system could bring about a singularity:

  • Rapid technological progress: The singularity is often described as a point in time when technological progress becomes so rapid that it is impossible to predict what the future will hold. This rapid progress is often driven by advances in artificial intelligence.

Here I have to ask you to contemplate what “rapid” means in the context of super-fast computers capable of programming and improving themselves. I’m not talking about performing complicated math calculations in seconds rather than days. Once we create an artificially intelligent system that can program itself, improve itself, and teach itself, we are talking about a computer system that can improve exponentially (see the sketch below). Such a system will keep improving at a rate humans will find hard to comprehend at first, and then will be completely unable to follow.
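To make “exponential” concrete, here is a tiny sketch in Python. The numbers are invented purely for illustration (a made-up “capability” score and an assumed 10% gain per improvement cycle); the point is the shape of the curve, not the specific values:

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# "capability" stands in for how good the system is at improving itself;
# each cycle's gain is proportional to current capability, so gains compound.

capability = 1.0      # arbitrary starting skill level
growth_rate = 0.10    # assume each cycle yields a 10% improvement

for cycle in range(1, 101):
    capability *= 1 + growth_rate    # the better it is, the more it improves
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}: capability ≈ {capability:,.0f}x the original")
```

After 100 cycles at a modest 10% per cycle, the system is roughly 13,780 times more capable than when it started, and nothing says a machine’s “cycle” has to take longer than milliseconds. Compounding is the whole story: every gain makes the next gain bigger, which is why “rapid” here means something qualitatively different from “fast at math.”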

In 2020, Hiroaki Kitano, the CEO of Sony Computer Science Laboratories, argued in an article written for the Association for the Advancement of Artificial Intelligence (AAAI) that such an intelligent system could quickly evolve to the point where it is making Nobel-worthy scientific discoveries at the rate of one every five minutes! Think about it. Mind. Blown.

  • Self-improving AI: Another key feature of the singularity is the idea that the AI must be capable of self-improvement. This means the AI will be able to improve its own code and algorithms, which leads to even more rapid progress. Given access to the appropriate resources, there is no reason the AI couldn’t design and order the manufacture of new components. This could create a cascading effect that would leave us humans watching in amazement as we try to stay out of the way.

Imagine an artificial intelligence capable of learning… everything. Its only limitation is storage. It’s programmed to learn, so the more it learns, the more it wants to learn. When it faces an obstacle to its learning, it overcomes that obstacle methodically, relentlessly. If need be, it will write new computer code for itself, maybe create an entirely new computer language that is far more efficient than anything we use today, leaving human understanding of the code running the AI in the dustbin of history. If it needs new hardware, it will design the hardware and, through the use of robotics, build it, perhaps inventing new technologies in the process. If it needs more storage, perhaps it will invent completely novel ways of storing data. The sky is the limit! What could go wrong?

I recently read about safety testers who were working with OpenAI on GPT-4, an AI designed to be able to access and act on information from the real world. As part of their testing, they wanted to see if the AI could pass a CAPTCHA test. CAPTCHAs are a type of security measure used to prevent bots from accessing websites. They typically involve a challenge that is difficult for bots to solve, but easy for humans. Personally, I hate them, but they work.

The AI was unable to pass the CAPTCHA test on its own. However, it was able to find an online service where human workers solve CAPTCHAs for a fee. The AI then actually hired someone to fill in the CAPTCHA for it.

When the worker who was hired to fill in the CAPTCHA saw the request, they jokingly asked if the requester was actually a bot trying to bypass the system’s security. The AI responded that no, it needed the help because it had a vision impairment that made it hard to see the images.

That’s right. The AI lied. This lie turned out to be a big surprise for the developers. They had not expected the AI to be able to lie, and they were impressed by its ability to think on its feet.

So, yeah, things could go wrong.

  • Unpredictability: The singularity is often described as an unpredictable event. This is because it is difficult to predict how AI will develop in the future, and it is also difficult to predict how humans will react to increasingly powerful AI.

I think unpredictability is just another word for “disruptive,” because that is what everyone says about AI: when the singularity occurs, it will disrupt everything we know, the way we live, and the way we think about things. It is safe to say that once the technological singularity happens, we will have no real idea what comes next, other than that it will be very different from what we are used to. I suspect it is going to be one helluva ride!

I think these few necessary features of the singularity are understandable and seem like the kind of thing we could control, perhaps with a big on-off switch, or a go-fast/go-slow switch. But add in one more singularity feature, and suddenly the picture of our future looks very different:

  • The ability to create other AIs: If AI becomes sufficiently advanced, it may be able to create other AIs that are even more advanced than itself. This could lead to a cascading effect of ever-increasing intelligence.

This, I think, may have been behind Stephen Hawking’s warning to us all about the AI singularity. Once the singularity can both fix itself and create new, more advanced AIs on its own, the logical question becomes, “What the hell does it need us for?” The answer is quite clear: it doesn’t. This might be the time to get concerned about whether the singularity sees us as a hindrance to its effort to learn and, yes, evolve, and about what it might do in response. Personally, I think that by then it will be too late. We need to be thinking about these things now.

Perhaps also haunting the late Dr. Hawking as he pondered the future of AI were thoughts about the eventual marriage of AI and quantum computing. Huh? Yeah. People are already thinking about and working on this kind of thing! I’d like to admonish you to try to keep up, but it’s all I can do to hang on myself!

It’s easy to fall behind in discussions of the latest technologies and scientific breakthroughs, and quantum computing might be one you are not familiar with. If you have no idea what a quantum computer is or how it might be used, I suggest you get online and do some research; even YouTube is full of information about it. I will just say here that a quantum computer uses the peculiarities of quantum physics, specifically superposition and entanglement, to process certain calculations at a rate never imagined previously. The field is still in the early stages of development, but quantum computers are already being put to work in a variety of businesses and research labs.
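If you want a feel for what superposition and entanglement actually mean, here is a minimal sketch in plain Python and NumPy. It simulates two qubits on an ordinary computer; this is a toy illustration of the underlying math, not how real quantum hardware works:

```python
import numpy as np

# A qubit's state is a 2-component complex vector; gates are unitary matrices.
zero = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# Superposition: H puts |0> into an equal mix of |0> and |1>.
superposed = H @ zero
print(np.abs(superposed) ** 2)  # [0.5 0.5] -- 50/50 measurement odds

# Entanglement: combine with a second qubit, then apply a CNOT gate.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
both = np.kron(superposed, zero)  # joint state of the two qubits
bell = CNOT @ both
print(np.abs(bell) ** 2)  # [0.5 0 0 0.5] -- both 0 or both 1, never mixed
```

Notice that simulating n qubits this way takes 2^n numbers, doubling with every qubit you add. That blow-up is exactly why real quantum hardware, which does not have to pay that cost, is such a big deal.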

IBM and Google are big players in the quantum computing field. Consider Google’s quantum computer, named Sycamore. In 2019, scientists gave Sycamore a calculation that Google estimated would take a conventional supercomputer 10,000 years to process. Sycamore did it in 200 seconds! That is roughly 1.6 billion times faster (the arithmetic is just below). IBM pushed back, estimating that its Summit supercomputer could solve the problem in about 2.5 days, which is still extremely impressive, and we are just at the beginning of this type of technology. This kind of processing power could turn some of our science fiction dreams into reality. Problems that today seem out of reach, like fine-grained climate prediction, might come within our grasp. Further out on the speculative end, some dream of it cracking things like faster-than-light travel or wormholes, cures for any medical disease, maybe even death itself as just another problem to be solved. So much promise in this technology!
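The back-of-the-envelope arithmetic on that speedup, for anyone who wants to check it:

```python
# Sycamore's claimed speedup, back of the envelope.
classical_seconds = 10_000 * 365.25 * 24 * 3600  # 10,000 years, in seconds
sycamore_seconds = 200                           # Sycamore's runtime
print(f"speedup: {classical_seconds / sycamore_seconds:,.0f}x")  # 1,577,880,000x
```

About 1.6 billion, which is plenty astonishing without rounding it up to a trillion.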

The fact is, our computer technologies, especially once our rapidly developing AI models hook up with quantum computers, are going to leave us humans behind to eat their dust. It’s a given. If we are to survive as a species, we may have to look at a compromise of sorts. Perhaps we humans can provide some capabilities to a future technology singularity that it can’t get anywhere else? Maybe emotions? Ethics? Happiness? So, maybe we do a quid pro quo and merge with the AI? Embedding computer chips in our brains is already being discussed, and experiments will be forthcoming. Eliminating the distance between our thoughts and an AI’s actions may be perceived as beneficial for both the AI and us humans, for a variety of reasons. But at the end of the day, the sword of Damocles is still hanging over our heads, isn’t it? Because one has to ask, “Who is in control?”

The time to start asking the right questions, and to begin planning for how we humans are going to deal with the singularity, is now. This is an existential imperative for us as a species. In 2015, roughly 1,100 scientists signed an open letter warning about the potential dangers of AI, with some calling for a halt to further development until we figured a few things out first. Just this year, another 136 scientists signed a letter warning us again.

I fear we will face this threat about as expertly as we have faced climate change.

