What is AI?

"Our intelligence is what makes us human, and AI is an extension of that quality." - Yann LeCun, one of the "godfathers of AI"

Artificial Intelligence isn't a single technology. From simple rule-based programs to today's most advanced systems, AI encompasses any system that mirrors human-like thinking.

AI Terms, Concepts and Analogies

The world of AI is like nested Russian dolls, each layer revealing a more specialized form. Machine learning, the practical powerhouse, drives your Netflix suggestions and guards your credit card from fraud. Deep learning takes inspiration from nature's greatest computer – the human brain – helping doctors spot diseases and cars recognize pedestrians.

Think of traditional AI as a seasoned actor following a script, executing tasks with precision and predictability. In contrast, generative AI like ChatGPT performs like an improv artist, creating fresh content from simple suggestions. This fundamental shift transforms AI from a simple calculator into a creative collaborator.

AI is not just ChatGPT

Not all AI is generative AI:
  • AI: Systems that mimic human-like thinking

  • ML: Computers learning from patterns in data

  • Deep Learning: Brain-inspired networks processing complex information

  • Generative AI: Creates new content from learned patterns

  • LLMs: Massive models trained on human text

  • GPT: Text generator using transformer architecture

  • ChatGPT: Conversational interface for GPT models


Think of AI as a vast ocean, where ChatGPT is merely one bright fish swimming near the surface. While millions chat with AI daily, the field runs far deeper.

At its core, AI encompasses any system that mimics human intelligence. Machine learning, its most practical branch, powers everything from Netflix recommendations to fraud detection in your credit card transactions. Deep learning, a subset that processes information like our brains, helps doctors spot diseases in medical images and enables cars to recognize pedestrians.

ChatGPT belongs to a specific family: Large Language Models. These text generators represent just one branch of generative AI, which also includes systems that create images, music, and even scientific discoveries. Yet AI's reach extends beyond generation – it schedules flights, optimizes supply chains, and predicts weather patterns.

The next time you hear "AI," remember ChatGPT is like a spokesmodel for a vast enterprise.

Traditional AI is like a Play, Generative AI is like Improv

Understanding the shift from rule-based systems to generative intelligence


Traditional AI:

  • Relies on established rules, logic, and algorithms to handle repetitive tasks

  • Trained to perform a specific task with a finite set of possible outcomes

  • Requires curated, labeled, and structured data


Generative AI:

  • Is not reliant on established rules and can create new logic

  • Can handle a variety of tasks and create content on the fly... with the right prompt

  • Identifies patterns from data all around the web regardless of labels or structure

There are two performers on a stage. One holds a script, meticulously following each line, stage direction, and cue. The other stands alone with a microphone, ready to interact with the audience and create something new. This contrast perfectly captures the difference between traditional and generative AI.

Traditional AI performs like that script-bound actor. Every response, every action flows from carefully programmed instructions. These systems excel at repetitive tasks – scanning documents, processing payments, identifying objects in images – because they're following a well-rehearsed routine. They need their "lines" spelled out through structured data and precise rules.

But generative AI breaks free from this rigid framework. Like an improv comedian who builds scenes from audience suggestions, these systems create fresh content from simple prompts. They've learned patterns from vast amounts of data, enabling them to generate text, images, and code without step-by-step instructions.

The difference isn't just in their capabilities – it's in their fundamental approach to problem-solving. Traditional AI asks "What rules should I follow?" while generative AI asks "What patterns can I build upon?" This shift marks a profound change in how machines process information and interact with humans.

uxGPT: Mastering AI Assistants for User Experience Designers and Product Managers

cover to book "uxGPT"

RECOMMENDED RESOURCE

An essential read with practical strategies to harness AI Assistants to plan and brainstorm user experience and product management activities. By mastering these prompts within the design thinking process, you'll unlock new ways to streamline workflows and generate innovative solutions.

Machine Learning is not the same as Deep Learning

Simple Patterns vs Neural Networks: Understanding AI's Building Blocks


Machine learning is like button mashing:

  • Uses algorithms and models to make predictions and decisions

  • Identifies patterns and works well for structured, defined tasks

  • Relies on simpler, more straightforward techniques


Deep learning is like skilled gameplay:

  • Uses advanced computing and mimics the human brain with artificial neural networks

  • Can adapt and learn with less human intervention

  • Can create new logic based on millions of data points

Every video game champion started as a button masher, pressing random combinations to occasionally land powerful moves - a perfect metaphor for how machine learning began. Machine learning, like an eager rookie, works through brute force and predetermined patterns. Feed it data about house prices, and it'll spot connections between square footage and cost. Show it enough cat photos, and it'll learn to recognize whiskers and pointy ears.

But deep learning? That's playing at championship level. Picture artificial neural networks as a digital brain, processing information in layers, each building upon the last. While machine learning needs carefully labeled data - like a novice following a guide - deep learning creates its own playbook. It discovers patterns within patterns, transforming raw pixels into face recognition systems and random sounds into speech.

Machine learning serves as our reliable workhorse, excelling at specific tasks like spam detection or product recommendations. Deep learning, however, powers those jaw-dropping AI moments - generating artwork, engaging in conversation, or navigating self-driving cars through city streets.

The difference? Computing power and complexity. Machine learning can run on your laptop, making smart choices about email filters. Deep learning demands serious hardware. Both approaches matter, each shining in its own arena.
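The "spot connections between square footage and cost" idea above can be sketched in a few lines. This is a minimal, illustrative example of machine learning as pattern-fitting: ordinary least squares fitting a straight line to price data. All numbers are made up for illustration; real systems use far more features and data.

```python
# Machine learning as pattern-spotting: fit a straight line
# (price vs. square footage) with ordinary least squares.

def fit_line(xs, ys):
    """Return slope and intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: square footage -> sale price
sqft = [1000, 1500, 2000, 2500]
price = [200_000, 290_000, 410_000, 500_000]

slope, intercept = fit_line(sqft, price)
print(f"Predicted price for 1800 sqft: ${slope * 1800 + intercept:,.0f}")
```

Deep learning stacks many such learned transformations in layers, letting the system discover its own features instead of being handed one like square footage.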

Attention is All You Need

How parallel processing transformed artificial intelligence


In 2017, eight Google scientists proposed a new model:

  • The Transformer model is made up of an encoder and a decoder

  • It evaluates all of the words in a sentence in parallel during encoding

  • Each word is broken down into meaning, context, and position and assigned a numerical vector that can be analyzed with probability algorithms

Imagine you are reading a book, but instead of understanding each sentence as a whole, you must read one word at a time, forgetting everything else. Frustrating, right? That's exactly how AI systems used to process language – painfully slow and remarkably inefficient.

Then in 2017, eight Google scientists transformed the field of artificial intelligence with an elegantly simple idea: what if AI could see all words simultaneously, just like humans do? They called their innovation the Transformer, and it changed everything.

Think of the Transformer as a master linguist with perfect peripheral vision. Rather than trudging through text word by word, it takes in entire sentences at once. Each word becomes a unique mathematical fingerprint capturing its meaning, context, and position. These fingerprints interact with each other, creating a rich tapestry of understanding.

The magic lies in its ability to learn from vast amounts of text. The more it reads, the better it becomes at predicting what words should come next – much like how humans learn language through exposure and practice.

This breakthrough sparked an AI renaissance, leading directly to the chatbots and language models we use today. The Transformer didn't just speed things up – it fundamentally changed how machines understand us.
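The "words interacting with each other all at once" idea is the Transformer's self-attention mechanism. Here is a toy sketch of its core: every word's vector is scored against every other word's vector, and each word's new representation becomes a weighted blend of the whole sentence. The tiny 2-d vectors are hand-picked purely for illustration; real models learn vectors with thousands of dimensions.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """For each word, blend all words weighted by dot-product similarity."""
    dim = len(vectors[0])
    output = []
    for query in vectors:
        # Score this word against every word in the sentence -
        # conceptually in parallel, with no left-to-right pass.
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
                  for key in vectors]
        weights = softmax(scores)
        blended = [sum(w * v[i] for w, v in zip(weights, vectors))
                   for i in range(dim)]
        output.append(blended)
    return output

# Three words as hypothetical 2-d embeddings
sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for word_vec in self_attention(sentence):
    print([round(x, 3) for x in word_vec])
```

Because every word attends to every other word in one step, the whole sentence can be processed at once - the parallelism that made Transformers so much faster to train than their word-by-word predecessors.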

ChatGPT is like type ahead search

Transformers (the “T” in GPT) generate words based on other words - like auto-complete

GPTs operate as:
  • Pattern detectors working at a scale far beyond word-by-word prediction

  • Autocomplete for the ideas and concepts in the language they were trained on

  • Language models that guess the next word based on mathematical probability


Think about the last time you typed a message on your phone. As you wrote "How are," your keyboard suggested "you" as the next word. ChatGPT works on this same principle, but at a much grander scale.

Language models like ChatGPT don't truly understand or think - they predict. They analyze vast amounts of text to determine which words most likely follow others. While your phone predicts one word at a time, ChatGPT predicts entire paragraphs by calculating the mathematical probability of word sequences.

This technology is less about artificial intelligence and more about pattern recognition. Just as your music app suggests songs based on your listening history, ChatGPT suggests words based on its training data. It's not creating new knowledge - it's recombining existing patterns in sophisticated ways.

Consider this: when you start typing "The weather is," your phone might suggest "nice" or "terrible." It makes this guess based on how often these words appear together in its database. ChatGPT does the same thing, but instead of single words, it works with concepts, ideas, and entire conversations.

The key point? ChatGPT is a mirror reflecting human knowledge, not a crystal ball predicting the future. Like financial advisors remind us about investments: past performance doesn’t guarantee future results. What makes ChatGPT and tools like it remarkable isn't its ability to think, but its unprecedented skill at identifying and reproducing patterns in human communication.

How does AI work?

Every digital interaction leaves a trace, like footprints in the sand. When you search online, post on social media, or shop for groceries, you're contributing to an endless ocean of data.

About AI Demystifying

You don’t have to be an expert to understand AI, just like you don’t have to be a mechanic to drive a car.

But it can be challenging to sort through the noise - and we all need simple mental models, cartoons in our heads, for how technologies work.

AI Demystifying is a place to begin sorting through the hype, unpacking foundational concepts and developing frames of reference for AI.

Process is a set of tools, not rules.

AI Demystifying is another UX How Tool from Method Toolkit LLC.

Logo for AI Demystifying
Explore More like AI Demystifying from UX How
CoDesign AI
illustration of subway map

Collaborating with AI and each other in building experiences.

blueprint style illustration of backpack
plaid pattern
UX Designer Guide

UX and Product Designer insights for navigating design realities.

XD Prompts

A collection of prompt engineering techniques for UX.
About UX How and T. Parke

UX How is a set of UX & Product Design “How To” sites with insights, resources, and blueprints for Design, UX and AI.

T. Parke is the Director of UX How with prior experience at ESPN, Disney, and Alaska Airlines. He has previously been a design leader on projects for Rolling Stone, Microsoft, Nickelodeon, and Marvel.


Logos for Disney, ESPN, Microsoft, Rolling Stone and Marvel