
(2024-08-26) Some AI Terms Everyone Should Know

Written by Jeff Drake
8 · 26 · 24

I have been so busy researching and writing my new mini-blog series, “Start Seeing Christian Nationalism,” that it is taking me a while to catch up with what has been happening in AI and, these days, robotics, because robots, specifically “humanoid” robots, are suddenly all the rage. How so much can change so fast is mind-boggling!

Before I dive into what will be some very cool topics on future AI stuff, I think I should recap some AI terms you need to be familiar with if you want to keep up with AI.

Neural Network

You’ve probably heard of neural networks; recently they have been advancing at an explosive pace. A neural network is often described as an artificial brain, and it is what allows LLMs (large language models) to do the amazing things they do. The “network” itself is a connected set of artificial neurons, loosely modeled on the ones in our brains, hence the name “neural network.”

These artificial neurons are best thought of as software constructs, much more mathematical model than physical thing. Although inspired by biological neurons, they should not be confused with human neurons; digital neurons are far simpler. Each network neuron connects to one or more other neurons, and the strength of each connection is the result of the data used to train the network. Did I say “train”? Yes, neural networks are trained, not programmed.
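
If you’re curious what one of these digital neurons boils down to, here is a toy sketch in Python (my own illustration with made-up numbers, not code from any real model): it multiplies each input by the strength (weight) of its connection, adds everything up, and squashes the total through a simple activation function.

    import math

    def artificial_neuron(inputs, weights, bias):
        """One artificial neuron: a weighted sum of its inputs, then an activation."""
        # The weights are the connection strengths that training adjusts;
        # nothing here is programmed to recognize anything in particular.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # A sigmoid activation squashes the result into the range (0, 1).
        return 1 / (1 + math.exp(-total))

    # Three inputs, three made-up connection weights, one bias.
    print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))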

You see, neural networks are a different animal, so to speak, and operate differently than traditional computers. For one thing, they are not programmed like traditional computers. There is no Assembler, no Fortran or Pascal, and no C++ (I’ve programmed in all four languages during my career in IT). Instead, today’s AI systems are “trained,” using things called “algorithms” and a method called “Deep Learning.”

Deep Learning

The “Deep” in Deep Learning refers to the fact that an AI is built and trained in layers. Between the data received as input and the output produced by an LLM, for example, sit many intermediate layers that are hidden from us users. Inputs arrive at the input layer, get multiplied by numbers called “weights” (the connection strengths used to figure out what to predict), and are then passed on to neurons in the next layer for further processing, until the signal finally hits the output layer, where the prediction is produced.
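
Building on the single-neuron sketch above, here is a toy forward pass through an input layer, one hidden layer, and an output layer (again, weights made up by me purely to show the layered flow, not a real model):

    import math

    def layer(inputs, weight_rows, biases):
        """One layer: every neuron takes a weighted sum of all the inputs it receives."""
        outputs = []
        for weights, bias in zip(weight_rows, biases):
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
        return outputs

    # Made-up weights: 2 inputs -> 3 hidden neurons -> 1 output neuron.
    hidden_weights = [[0.2, 0.8], [-0.5, 0.3], [0.9, -0.1]]
    hidden_biases = [0.0, 0.1, -0.2]
    output_weights = [[0.6, -0.4, 0.3]]
    output_biases = [0.05]

    x = [1.0, 0.5]                                             # input layer
    hidden = layer(x, hidden_weights, hidden_biases)           # hidden layer (invisible to users)
    prediction = layer(hidden, output_weights, output_biases)  # output layer
    print(prediction)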

During training, the neural net adjusts the weights assigned to the connections between the neurons using something called “backpropagation.” You can think of this as a feedback loop: the neural net goes over the training data repeatedly, and as it does so, it hones its weights and produces better results. Scientists have also learned that the more layers a neural network has, the more complex the problems it can solve, and exactly why that works so well is still not fully understood.
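
To show what that feedback loop looks like in miniature, here is a toy sketch of gradient descent on a single weight (real backpropagation does this for millions of weights across many layers at once, but the idea of repeatedly nudging weights to shrink the error is the same):

    # Toy "training": learn a single weight w so that prediction = w * x
    # matches a known answer (with x = 2 and a target of 10, w should end up near 5).
    x, target = 2.0, 10.0
    w = 0.0                 # start with a bad guess
    learning_rate = 0.05

    for step in range(100):
        prediction = w * x
        error = prediction - target
        gradient = 2 * error * x       # how the error changes as w changes
        w -= learning_rate * gradient  # nudge w in the direction that reduces the error

    print(round(w, 3))  # close to 5.0 after repeated passes over the data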

In a few years, very few programmers will be needed in the industry. Instead, we will need more people educated in Deep Learning.

Note that compilers, a required tool of the traditional programming I did while working, are not what does the heavy lifting here. Instead, we feed neural networks data, copious amounts of data! The AI systems read this data, detect patterns in it, and build internal mathematical models that can be used for making both predictions and decisions.

But here’s the thing about neural nets – they are a mystery! They are the original “black box”: scientists today do not fully understand how neural networks do what they do. They feed LLMs data and, almost miraculously, out come useful answers. For reasons still not well understood, AIs get smarter the more data they consume, as long as that data is “quality” data. The good stuff. Again, scientists don’t know why this is the case, just that it happens. The more data an AI consumes, the more intelligent the AI becomes. And this worries the researchers hoping to create the first AGI (artificial general intelligence), because they have nearly exhausted the world’s supply of quality training data. That’s right: the entire internet and every digitized library book in the world, in lots of different languages, has been located, “scraped,” and fed to today’s LLMs.

Transformers

And in 2017 the AI gods created Transformers, and the world was never the same again!

Indeed, 2017 is the birth year for the next industrial revolution, the AI revolution, which will be so much bigger than the first industrial revolution! The transformer is credited with making this possible.

The transformer is a neural network architecture, introduced in a 2017 Google research paper, that chews through every piece of data an AI is fed. Transformers were initially developed to help with NLP (natural language processing), and their key ingredient is something called a “self-attention mechanism.” They delivered a huge boost in AI performance. It isn’t quite accurate, but whenever I hear the word “transformer,” I always think of Westworld, the TV series.

Self-attention is a mechanism that lets a transformer analyze its input and weigh the importance of each part relative to all the others, providing focus exactly when and where focus is needed. This is considered a major scientific breakthrough, one that transformed the industry (no pun intended, but I’ll take it!). Transformers are credited as one of the main reasons AI has been growing so exponentially, and they are a big part of why LLMs can handle friggin’ English! They also allow LLMs to consume the vast amounts of data they require for Deep Learning.
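
For anyone who wants to peek under the hood, here is a bare-bones sketch of scaled dot-product self-attention, the math at the heart of the 2017 transformer paper, written with NumPy and toy numbers of my own (real models add multiple attention “heads,” training, and much more):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over a sequence of token vectors X."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant each token is to every other token
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
        return weights @ V                       # attention-weighted blend of the values

    # Toy example: a "sentence" of 3 tokens, each represented by a vector of 4 numbers.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))
    Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # (3, 4): one blended vector per token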

In fact, transformers are themselves neural networks, and Deep Learning is just the name for training neural networks that have many layers.

Algorithm

“Algorithm” is a term you can’t avoid when reading almost anything about AI systems. An algorithm is simply a set of instructions or rules followed to perform a task or solve a problem. What’s remarkable is that an algorithm can be described in plain English, readable by anyone. For example, here is an algorithm for finding the largest number in a list (list provided to me by Claude):

  1. Start with a list of numbers and set the first number as the largest.
  2. Compare the current largest number with the next number in the list.
  3. If the next number is larger, update it as the new largest number.
  4. Repeat steps 2-3 until you’ve compared all numbers in the list.
  5. The number you have at the end is the largest in the list.

You don’t need to be a computer whiz to read this list and understand what is being requested. The example above is a simple five-step example, of course; be assured that algorithms can also be quite complex. And algorithms don’t stay in plain English; they are eventually written out in a programming language. Today the language of choice in AI work is Python (I love that name), with C++ still used in backend AI systems for performance reasons. Fortunately, we users can just talk to the AIs in our own languages.
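
Here is what that translation might look like: the five plain-English steps above, written out in Python (my own rendering, not code handed to me by Claude):

    def find_largest(numbers):
        """Find the largest number in a list, following the five steps above."""
        largest = numbers[0]          # Step 1: start with the first number as the largest
        for number in numbers[1:]:    # Steps 2 and 4: compare against each remaining number
            if number > largest:      # Step 3: if the next number is larger...
                largest = number      # ...it becomes the new largest
        return largest                # Step 5: whatever is left is the answer

    print(find_largest([7, 42, 3, 19, 8]))  # prints 42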

Tokens

Note that each word (or word fragment) an AI reads, including every word in the algorithm above, is called a “token”; the example works out to roughly 63 tokens. Every word you include in a prompt counts toward a token limit, and the prompt windows of AIs like ChatGPT and Claude now range up to hundreds of thousands of tokens, with some reaching 500,000. Google and others are promising prompt windows that will allow for 1 million tokens and beyond!
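
If you want to see how a real tokenizer splits text, the snippet below is one way to check, assuming OpenAI’s open-source tiktoken library is installed (pip install tiktoken); other models use different tokenizers, so exact counts vary:

    import tiktoken

    # cl100k_base is the tokenizer used by several recent OpenAI models.
    encoding = tiktoken.get_encoding("cl100k_base")

    text = "Start with a list of numbers and set the first number as the largest."
    tokens = encoding.encode(text)

    print(len(text.split()), "words")  # word count, for comparison
    print(len(tokens), "tokens")       # tokens are usually whole words or word fragments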

When token limits climb to a million or more, it means you can feed an entire library of books to an AI like ChatGPT in a single prompt (they have ways of doing this) and then query the AI based on that data. The AI will know every damn thing contained in every single book it’s fed!

One great side effect of this large-prompt capability is that it skirts around the lack of long-term memory that most AIs still have today. Although the industry is working on resolving this (the latest ChatGPT has a form of persistent memory), you can use the increased prompt size to literally cram an AI with a library of information and then ask questions about it. How cool is that?
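
As a rough sketch of that cramming idea (this assumes the official openai Python package and an API key in your environment; the folder name and model name below are placeholders I made up), the trick is simply to stitch your documents into one giant prompt and then ask questions about it:

    from pathlib import Path
    from openai import OpenAI  # assumes: pip install openai, and OPENAI_API_KEY is set

    # Gather the "library": every text file in a folder, stitched into one big blob.
    library = "\n\n".join(p.read_text() for p in Path("my_books").glob("*.txt"))

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any model with a large enough context window
        messages=[{
            "role": "user",
            "content": f"Here is my library:\n\n{library}\n\n"
                       "Question: which of these books discusses neural networks?",
        }],
    )
    print(response.choices[0].message.content)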

It’s important to note that while not all algorithms are neural networks, all neural networks are, in fact, algorithms.

Sheesh, I’ve written so much and not even discussed some of the new things I wanted to. In any event, I thought it was best to describe these terms now so I don’t have to go over them again when discussing new developments.

Here is what I will try to address in my next AI Notebook posts:

  • “Liquid networks.” That’s a cool name, I think. This will be the next generation of neural networks. Think faster, smarter, much faster, much smarter!
  • “Earth 2” by Nvidia. Nvidia is attempting to emulate the entire planet Earth digitally inside a massive computer system. This will be used for tackling huge problems like the climate crisis and disaster response, and for vastly reducing the amount of time it takes to get a new drug to market. Years of “trials” can be done in days.
  • “Neuromorphic Computing.” This is a new approach in computer design that aims to mimic the functions found in biological neural networks (i.e., brains). The human brain is perhaps the most energy-efficient piece of biology nature ever produced. Developers hope to emulate this digitally.
  • “Federated Learning.” This is a new, distributed machine learning method that allows for AIs to train on decentralized data instead of shared data. This then allows multiple people or teams to collaborate on their training efforts while keeping their data private. Think secure data.
  • “Edge AI.” This is related to all the talk these days about AI “agents.” It refers to an ability to run AI algorithms directly in individual devices, like your phone, for example, removing the need to interface with a cloud of some kind. Another important security development, I suspect.
  • “Quantum AI.” Yep, I’m talking about the marriage between AI and quantum computing. It’s already here, folks, although it’s probably more appropriate to say it has “just been born!”

There’s more, but I think this will provide me with some much-needed distractions from the political turmoil we are all experiencing these days. I hope you think so, too.

Join me, as we explore the future that is now!


