Cyber Bytes: 15 Artificial Intelligence Terms Explained

Artificial intelligence (AI) has dominated the headlines. Here are a few key terms you’ll encounter when reading or talking about AI.

Algorithms are detailed sets of computational instructions that machines follow. They can describe ways to solve problems, perform tasks and predict patterns like weather or behavior. Algorithms control how a device frames and processes data for its intended purpose.
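As a simple illustration (invented for this article, not from any real product), here's a short Python algorithm that predicts the next value in a series of made-up daily temperatures:

```python
def moving_average_forecast(readings, window=3):
    """Predict the next value as the average of the last `window` readings."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

# Hypothetical daily high temperatures; forecast tomorrow's high.
highs = [68, 70, 71, 73, 75]
print(moving_average_forecast(highs))  # averages 71, 73 and 75 -> 73.0
```

The instructions are fixed and explicit, which is what separates a plain algorithm from the machine learning approaches described later.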

Algorithmic bias describes the negative impacts of tools like AI when they draw from large datasets skewed by historical or selection bias. If the data inputs are biased, the outputs will be biased. Algorithmic bias is typically an unintentional byproduct of human programming. Businesses, legislators and watchdog groups are scrutinizing AI and developing policies to correct algorithmic bias.

Artificial intelligence refers to computer systems, software or processes designed to mimic human tasks and reasoning. AI programs are trained on enormous datasets to accomplish tasks like analyzing speech, interpreting visual information and learning. AI is not intelligent in the human sense, but it can process data, learn from it and improve over time. AI can aid humans in sophisticated ways.

Artificial general intelligence (AGI) is a theoretical form of AI that could perform any task a human could. It would be able to teach itself and complete new tasks without task-specific training. AGI doesn’t exist yet, but debates about how to ethically develop and classify it continue. For example, AGI could handle unfamiliar tasks, learn and secure its own survival by generalizing from its training data and reaching new conclusions. AGI is different from generative AI.

Generative adversarial networks (GANs) are pairs of neural networks designed to work against each other to create realistic outputs. They are trained on the same dataset. One neural network (the “generator”) generates new data. At the same time, the second neural network (the “discriminator”) evaluates whether the generator’s output is real (based on the original dataset) or fake (created by the generator). Each neural network learns and improves from these interactions. The goal is to make the generator so good that its creations fool the discriminator into thinking they are real. You could think of GANs as master forgers, constantly improving their techniques until their forgeries are indistinguishable from the real thing.
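The forger-versus-inspector loop can be sketched in miniature. The Python toy below is a drastically simplified GAN: the "real" data (numbers clustered around 4), the one-parameter generator and discriminator, and the learning settings are all invented for illustration. Real GANs use deep neural networks on both sides, but the adversarial structure is the same:

```python
import math
import random

random.seed(0)

def sigmoid(t):
    """Squash any number into a 0-to-1 'probability of being real'."""
    t = max(-60.0, min(60.0, t))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-t))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data the GAN should learn to imitate: numbers clustered around 4.
def real_batch(n):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

w, b = 1.0, 0.0   # generator: turns noise z into a fake sample x = w*z + b
a, c = 1.0, 0.0   # discriminator: scores sample x as real with sigmoid(a*x + c)

lr, batch = 0.02, 32
initial_mean = mean([w * random.gauss(0, 1) + b for _ in range(1000)])

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    real = real_batch(batch)
    zs = [random.gauss(0, 1) for _ in range(batch)]
    fakes = [w * z + b for z in zs]
    d_real = [sigmoid(a * x + c) for x in real]
    d_fake = [sigmoid(a * x + c) for x in fakes]
    grad_a = mean([(1 - p) * x for p, x in zip(d_real, real)]) + \
             mean([-p * x for p, x in zip(d_fake, fakes)])
    grad_c = mean([1 - p for p in d_real]) + mean([-p for p in d_fake])
    a += lr * grad_a
    c += lr * grad_c

    # Generator update: nudge w and b so the discriminator scores fakes as real.
    zs = [random.gauss(0, 1) for _ in range(batch)]
    fakes = [w * z + b for z in zs]
    d_fake = [sigmoid(a * x + c) for x in fakes]
    w += lr * mean([(1 - p) * a * z for p, z in zip(d_fake, zs)])
    b += lr * mean([(1 - p) * a for p in d_fake])

final_mean = mean([w * random.gauss(0, 1) + b for _ in range(1000)])
print(f"generator output mean: {initial_mean:.2f} -> {final_mean:.2f}")
```

The generator starts out producing numbers near 0, and the back-and-forth competition drags its output toward the real data's neighborhood of 4 — the forger improving until its forgeries pass inspection.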

Generative AI can create things like text, images, video, audio and computer code in response to human queries or commands. Generative AI can quickly process data, making it an effective assistive tool for repetitive tasks and data analysis.

Much of today’s generative AI is built on large language models. It depends on training data to make inferences, recognize patterns and generate unique outputs. Methods like GANs can be part of its training.

Generative AI isn’t conscious, despite its human-like tone and interactivity. Generative AI is not the same as AGI.

Data augmentation is a machine learning technique that generates more data from an initial pool of training data. Traditionally, AI systems were developed with a “model-centric” approach: collect massive amounts of data and train algorithms on it to produce results.

Model-centric systems require large datasets to be effective, and collecting, labeling and validating that data can be expensive. Not every industry has huge datasets to work with. For example, a health care company researching a cure for a rare disease could struggle with a model-centric system if it doesn’t have enough data. Parsing big data is expensive and requires high processing power. The data can also include bias and be vulnerable to cyberattacks.

But newer AI technologies are steering toward cleaner data from the start. Data-centric systems make data the focal point, not the processing algorithms or computer architecture. They aim to improve the data quality by choosing better labels, using complete and representative data, and minimizing data bias.

Data augmentation can be used for all types of data, including images. It can flip, crop and rotate images for training. These training techniques encourage the AI to learn from what it sees in pictures rather than memorize static images.
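Here's a toy Python sketch of image augmentation, with a made-up grid of pixel numbers standing in for a photo:

```python
# A tiny 2x3 "image": each number is a pixel brightness value.
image = [[1, 2, 3],
         [4, 5, 6]]

def flip_horizontal(img):
    """Mirror left-to-right: each row is reversed."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Mirror top-to-bottom: the row order is reversed."""
    return img[::-1]

def rotate_90(img):
    """Rotate clockwise: the bottom row becomes the first column."""
    return [list(row) for row in zip(*img[::-1])]

# One original image becomes four distinct training examples.
augmented = [image, flip_horizontal(image), flip_vertical(image), rotate_90(image)]
for example in augmented:
    print(example)
```

A cat photo flipped or rotated is still a cat, so the model gets extra variety to learn from without anyone collecting new pictures.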

Data-centric technology seeks to correct the problem of “bad data in, bad data out” by training on superior datasets.

Deep learning knits algorithms together to mimic how the human brain processes information through its network of neurons. The algorithms form an artificial neural network that can independently learn and make intelligent decisions.

Deep learning is a subset of machine learning. It can recognize patterns with extreme complexity. It’s the tech behind realistic AI-generated images and voice synthesis.

Hallucination is a slang term for when AI produces inaccurate or illogical information in response to queries. People may prefer “confabulation” or “inaccuracies” to avoid parallels with human mental illness.

Large language models are AI systems that learn patterns from written text to generate responses. ChatGPT is an example of a large language model. It can mimic human language patterns to create media that simulates many styles of interaction and types of content.
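As a loose, miniature illustration of "learning text patterns", the Python toy below counts which word follows which in a made-up sentence and then predicts the most common follower. Real large language models are vastly more sophisticated, but the flavor of next-word prediction is similar:

```python
from collections import Counter, defaultdict

# A made-up "training corpus", far smaller than anything a real LLM uses.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # in this corpus, "the" is usually followed by "cat"
```

Scale that counting idea up to trillions of words and billions of learned parameters and you get a rough intuition for how an LLM produces fluent text.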

Machine learning is an AI process that identifies patterns in data to make decisions and predictions. The machine generates these decisions without explicit programming. Machine learning applications include face recognition, language translation and self-driving cars. For example, when your car’s automatic emergency braking system slams the brakes to avoid hitting a stopped vehicle, that’s machine learning at work.
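To show the "no explicit programming" idea, this Python sketch learns a braking cutoff from made-up labeled examples instead of having the rule hand-coded. The distances and labels are invented for illustration:

```python
# Made-up labeled examples: (distance_to_obstacle_in_meters, should_brake).
# No braking rule is written by hand; it is learned from the examples.
examples = [(2, True), (4, True), (6, True), (9, False), (12, False), (15, False)]

def learn_threshold(data):
    """Pick the brake-distance cutoff that misclassifies the fewest examples."""
    candidates = sorted(set(d for d, _ in data))
    best, best_errors = None, len(data) + 1
    for t in candidates:
        errors = sum((d <= t) != label for d, label in data)
        if errors < best_errors:
            best, best_errors = t, errors
    return best

threshold = learn_threshold(examples)
print(threshold)  # the learned rule: brake at or under this distance
```

The pattern (brake when close, don't when far) came from the data, not from a programmer typing "if distance < 6".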

Don’t use AI and machine learning interchangeably. Machine learning is a computer process; AI is the broader category of tools and technologies, many of which are built on machine learning systems.

Neural networks are layered sets of algorithms that identify underlying relationships in a dataset. They’re like a digital version of the human brain’s network of neurons. GANs are a type of neural network used in generative AI.
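A neural network can be sketched at its smallest scale. The Python example below wires three artificial "neurons" together to compute exclusive-or, a classic relationship a single neuron can't capture. The weights here are hand-picked for illustration; in real networks they are learned from training data:

```python
def step(t):
    """A simple on/off activation: the neuron 'fires' if its input is high enough."""
    return 1 if t > 0 else 0

def tiny_network(x1, x2):
    """Two hidden neurons feed one output neuron (weights hand-set for illustration)."""
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires if either input is on (OR)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires only if both are on (AND)
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # OR but not AND: exclusive-or

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", tiny_network(*pair))
```

Deep learning stacks many such layers with millions of connections, letting the network represent far more complex relationships than this three-neuron toy.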

Reinforcement learning is a type of machine learning where a machine learns to make decisions by trial and error. It’s rewarded for good choices.

For example, imagine a robotic arm on an assembly line. It should pick up a metal part, move it to a face plate and solder it together. At first, the robot doesn’t know how to perform the task. It starts by trying random movements, often failing, but occasionally getting it right. Every time it succeeds, it gets a reward. The robot uses this feedback to adjust its movements and learns the most effective way to place the part correctly and solder it. In training, it’s given obstacles to improve its ability to respond to distractions in the environment. Eventually, it completes the task, overcoming different environmental variables.
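The trial-and-error loop can be sketched in a few lines of Python. The three "movements" and the reward below are invented stand-ins for the robot-arm example; only movement 2 places the part correctly:

```python
import random

random.seed(0)

# Three possible arm movements; only movement 2 earns a reward (made-up task).
def reward(action):
    return 1.0 if action == 2 else 0.0

values = [0.0, 0.0, 0.0]  # the agent's running estimate of each movement's value
epsilon, lr = 0.2, 0.1    # explore 20% of the time; learn a little from each try

for trial in range(500):
    if random.random() < epsilon:
        action = random.randrange(3)        # explore: try a random movement
    else:
        action = values.index(max(values))  # exploit: repeat the best known move
    r = reward(action)
    values[action] += lr * (r - values[action])  # nudge estimate toward reward

print(values.index(max(values)))  # the movement the agent learned to prefer
```

Random attempts occasionally stumble onto the rewarded movement; the reward raises its value estimate, and the agent increasingly repeats it — the same feedback loop driving the robotic arm above, just in miniature.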

Training data is a dataset used to teach an algorithm or a machine learning model how to make predictions. The dataset can include anything, such as written text, images of faces, historical data, weather information, traffic patterns or arrest records. Training data is the basis of a machine’s intelligence. If the data input is limited or skewed, the output will be too.

Transfer learning is when a pre-trained model is trained further to perform a new or different task. It’s a way to customize an off-the-shelf model to fit your needs. It also makes learning quicker and saves on computing resources.

For example, say you deploy a generative AI tool like ChatGPT with its pre-trained data. First, you fine-tune it with data specific to your industry, such as commercial insurance. Later, you pair each employee with their own instance to further train it to simulate the employee’s writing style. Over time, the generative AI model becomes more valuable and customized to the business and each employee.
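Here's a miniature Python sketch of the idea: a frozen "pre-trained" piece is reused as-is, and only a small new layer is trained on made-up data for the new task. Everything here (the features, the data, the numbers) is invented for illustration:

```python
# "Pre-trained" feature extractor, kept frozen: imagine it came from a big
# model trained on another task. Here a tiny stand-in builds simple features.
def pretrained_features(x):
    return [1.0, x, x * x]  # bias, linear and squared features (not updated)

# New task (made-up data following y = 3x + 1). We train ONLY the small
# "head" on top of the frozen features -- that is the transfer-learning step.
data = [(x / 4.0, 3 * (x / 4.0) + 1) for x in range(5)]
head = [0.0, 0.0, 0.0]  # the only weights we update

def predict(x):
    return sum(w * f for w, f in zip(head, pretrained_features(x)))

lr = 0.3
for _ in range(5000):
    for x, y in data:
        err = predict(x) - y
        feats = pretrained_features(x)
        for i in range(3):
            head[i] -= lr * err * feats[i] / len(data)

print(round(predict(0.5), 2))  # close to the true value 3*0.5 + 1 = 2.5
```

Because only three numbers are trained instead of the whole model, learning is fast and cheap — the same reason companies fine-tune off-the-shelf models rather than train from scratch.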

A pre-trained model can be more efficient than training from scratch. But some businesses might have intellectual property and data privacy concerns.

AI on trend

AI and its uses will continue transforming, as will the terminology. This list isn’t exhaustive, but it can give you a quick knowledge infusion to stay on trend.