How does AI work?

"Technology is a useful servant but a dangerous master" - Christian Lange

Every digital interaction leaves a trace, like footprints in the sand. When you search online, post on social media, or shop for groceries, you're contributing to an endless ocean of data. This vast sea of information forms the foundation of modern artificial intelligence.

AI Realities, Constraints and Assumptions

AI is not a genius inventor, but the world's most thorough student - one that studies billions of examples to spot patterns. Just as your brain learns to recognize cats by seeing many cats, AI systems learn by processing massive amounts of data. They don't memorize facts in a giant digital filing cabinet. Instead, they build intricate webs of connections, much like the neural pathways in your mind.

When you ask an AI a question, it's not searching through a database for the perfect answer. Rather, it's reconstructing knowledge from these learned patterns, like an artist creating a new painting inspired by every artwork they've ever seen. The result isn't perfect - sometimes details get fuzzy, just as a compressed photo loses some of its sharpness. But within these limitations lies remarkable capability, transforming raw data into meaningful insights that help us understand our world in new ways.

LLMs are like a JPEG of the Internet

AI compresses training data into meaningful patterns

LLMs (Large Language Models) can be thought of as a “lossy” compression of the web
  • Lossy compression sacrifices detail, depth, and resolution - the result is fuzzy, but the important parts survive.

  • LLMs are lossy versions of the data they are trained on.

  • By holding onto relationships and proximity between concepts, LLMs can produce meaningful output, such as generated text.


When capturing a sunset with your smartphone, the resulting JPEG your phone stores keeps the brilliant oranges and purples while discarding subtle gradients your eyes can't quite detect. Large Language Models (LLMs) perform a similar magic trick with the internet's vast knowledge – they compress, simplify, and reconstruct information on demand.

Think of an LLM as a student cramming for an exam. Rather than memorizing textbooks word-for-word, they grasp key concepts and relationships. When asked a question, they reconstruct an answer from these patterns, much like your brain pieces together memories of that sunset without remembering every exact detail.

Compression isn't perfect. Just as a JPEG might show subtle artifacts around sharp edges, LLMs can sometimes produce responses that seem almost right but miss crucial nuances. They capture the broad strokes of human knowledge while losing some precision along the way.

What makes this remarkable isn't just the compression itself, but how these models learn to reconstruct meaning. Feed them a prompt, and they'll weave together patterns they've observed, creating responses that feel surprisingly coherent and contextual. They're not simply replaying memorized text – they're performing a kind of pattern-based improvisation.

As we rush toward an AI-driven future, understanding these models as compression engines helps demystify their capabilities and limitations. They're neither omniscient oracles nor simple lookup tables – they're sophisticated pattern recognizers, turning the internet's vast tapestry into something more manageable, if occasionally imperfect.
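The compression analogy can be made concrete with a toy sketch. The snippet below is an invented illustration, not how LLMs actually work: it rounds precise numbers onto a coarse grid, so the broad shape of the data survives while fine distinctions disappear - much like a JPEG, or an LLM's fuzzy recall.

```python
# Toy "lossy" compression: snap detailed values to a coarse grid, then look
# at what survives. The overall shape remains; fine detail is gone.

def compress(values, step=0.5):
    """Keep only the nearest multiple of `step` for each value."""
    return [round(v / step) * step for v in values]

original = [0.12, 0.48, 0.51, 0.97, 1.03]
lossy = compress(original)
print(lossy)  # the gist remains, but 0.48 and 0.51 now look identical
```

Notice that two values that were slightly different (0.48 and 0.51) come out identical after compression - the same way an LLM can blur two similar facts together.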

Where do LLMs store data? They don’t

The surprising way AI stores (or doesn't store) information

No giant spreadsheet of facts:
  • Fundamentally different from traditional databases with related tables

  • LLMs are neural networks that represent concepts as vectors

  • Related concepts sit close together in this vector space, and everything is connected - like a mind map


Recipe books are often neatly organized and sorted by categories like baked goods and desserts. Now imagine your own mind – a vast network of connected thoughts, where "apple pie" might link to autumn memories, family gatherings, and the smell of cinnamon. Large Language Models (LLMs) work like your mind, not like that recipe book.

Traditional databases are like that recipe book – organized tables with clear categories and relationships. But LLMs? They're more like a game of word association played at the speed of light, with trillions of weighted connections between concepts.

When you think "cat," your brain doesn't look up a definition in a mental dictionary. Instead, it activates a web of related concepts: fur, purring, whiskers, pets. LLMs work similarly, using what is called vector space. Imagine a vast cosmic web where related words cluster together, and distant concepts drift apart.

These neural networks don't memorize facts in tables. Instead, they learn patterns and relationships, building layers upon layers of connections. With hundreds of billions - sometimes trillions - of these connections (called parameters), they can recognize patterns in language just as your brain recognizes patterns in the world.

So next time you chat with an AI, remember: you're not talking to a super-powered search engine. You're conversing with a pattern-matching machine that, like your own mind, weaves meaning from a complex web of connections.
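If you're curious what "related words cluster together" looks like in code, here is a minimal sketch. The three-dimensional vectors are invented for illustration (real models learn hundreds of dimensions from data); cosine similarity measures how close two concepts sit in that space.

```python
import math

# Invented 3-dimensional "word vectors" for illustration only - real models
# learn their vectors from data, with hundreds of dimensions.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Direction-based closeness: near 1.0 means related, near 0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # high: related concepts
print(cosine_similarity(vectors["cat"], vectors["car"]))  # low: distant concepts
```

"Cat" and "dog" point in nearly the same direction, so they score high; "cat" and "car" point apart, so they score low - the mind-map geometry in miniature.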

AI Reflects Our Biases

AI doesn't create bias - it inherits it


All data is human-made:

Imagine a child learning about the world exclusively from books written in 1850. That child would absorb the values, assumptions, and limits of knowledge of that era. AI systems work much the same way – they're learning from our past and present choices, complete with all our human flaws.

When we feed AI systems data about loan applications, hiring decisions, or medical diagnoses, we're not just giving them neutral information. We're passing along decades of human decisions shaped by social structures, economic systems, and cultural beliefs.

Think about facial recognition software that struggles with darker skin tones. The problem is that the humans who collected the training data didn't include enough diverse faces. The AI simply mirrors back what it's shown.

AI systems can only work with what we give them. They can't question whether lending patterns show racial bias or if medical data reflects gender disparities. They just learn the patterns we present.

The path forward isn't about making AI more ethical – it's about examining our own biases first. Because in the end, AI isn't creating these problems. It's just showing us who we are.

uxGPT: Mastering AI Assistants for User Experience Designers and Product Managers

Cover of the book "uxGPT"

Recommended Resource

An essential read with practical strategies to harness AI Assistants to plan and brainstorm user experience and product management activities. By mastering these prompts within the design thinking process, you'll unlock new ways to streamline workflows and generate innovative solutions.

Garbage In, Garbage Out

AI’s hunger for information and the potential data drought


Clean public data available on the internet has likely reached a limit:

  • Running out of natural data may limit growth in AI capabilities

  • The shortage is made worse by the practice of using ever more data to overtrain models

  • “Synthetic” (AI-generated) data may cause these models to decay or stop improving

The internet is running out of food for AI. Every day, AI systems consume massive amounts of online text, images, and data to learn and grow. But this growth comes at a cost: we're rapidly depleting the supply of high-quality public data.

Think of AI training as someone learning from books, conversations, and experiences. But what happens when they've read every book in the library? When they've heard every story? That's where AI stands today – it has nearly exhausted the pool of clean, reliable public data available online.

This matters because AI's outputs mirror its inputs. Poor data leads to poor results. The companies that built their AI models using pre-AI internet data may have accidentally secured an advantage – their systems learned from more authentic, human-generated content.

What comes next? The answer lies in our own backyards. Businesses sitting on treasure troves of internal data may hold the key to AI's future growth. But this shift from public to private data sources raises new questions about access, quality, and control.

AI is a time machine looking backward, not forward

AI is the world’s most sophisticated echo

Weather forecasters look through countless satellite images, temperature readings, and wind patterns. They're not guessing tomorrow's weather by magic – they're analyzing what happened before when similar conditions appeared. AI works the same way, but instead of cloud formations, it studies patterns in data.

Like a meticulous historian, AI systems digest vast archives of information. They spot patterns, find connections, and learn from every example they encounter. But here's the catch: they can only work with what they've seen before.

When new situations arise – a global pandemic, a revolutionary technology, or an unprecedented market shift – AI systems face the same challenge as our weather forecaster during a once-in-a-century storm. They can make educated guesses based on similar past events, but they're not equipped to truly predict the unprecedented.

This limitation isn't a flaw; it's the nature of pattern recognition. AI excels at finding connections in historical data, making it invaluable for tasks like image recognition or language processing. But when we ask it to peer into the future, we're really asking it to show us echoes of the past.

Understanding this distinction helps us use AI more effectively. We can trust it to recognize patterns we might miss, but we shouldn't expect it to be a crystal ball.

AI isn't getting much better, just more specialized

The future may belong to more focused and efficient models (or models on top of models)

New models are plateauing, so the focus is shifting to specialization:
  • Need for models tailored to specific needs, not just all-purpose systems

  • Smaller models may make more sense - bigger isn’t always better

  • Better understanding of the platform leads to better value


The AI world is shifting from bigger to better. While tech headlines trumpet each new Large Language Model, a quieter revolution may be taking place with specialized AI winning the day.

Take a Swiss Army knife versus a chef's knife. A Swiss Army knife does everything, but a chef picks up their specialized blade for the perfect cut. Organizations - and individuals - need models tailored to their specific needs, not just massive all-purpose systems.

This shift makes sense. Current AI models still stumble over basic tasks – ask them to count the letters in "strawberry" and watch them falter. Companies are discovering that linking smaller, specialized models creates more value than one giant system trying to do it all.

Techniques to augment systems include RAG (Retrieval-Augmented Generation), which pairs AI models with specific databases or documents. When ChatGPT says it's "searching the web," it's using a similar retrieval approach. Rather than trying to know everything, the system searches outside itself for what it needs.

The future of AI may not be about building bigger brains – it's about building smarter ones. Just as evolution favors specialized species in specific niches, AI development is heading toward focused, efficient models that do one thing exceptionally well.
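To make the RAG idea tangible, here is a heavily simplified sketch. The documents and the word-overlap scoring are invented for illustration; real systems use vector search over embeddings and pass the retrieved context to an actual model.

```python
# Minimal RAG sketch: retrieve the most relevant snippet from a small private
# "knowledge base", then build a prompt around it. Illustration only.

documents = [
    "Returns are accepted within 30 days with a receipt.",
    "Shipping is free on orders over $50.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "When are your support hours?"
context = retrieve(question, documents)
prompt = f"Answer using this context: {context}\nQuestion: {question}"
print(prompt)
```

The model never needs to "know" the support hours - it only needs to read the retrieved snippet placed in front of it, which is why pairing a modest model with the right data can beat a giant one working from memory.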

How to Approach AI?

For every headline celebrating AI's capabilities, there's another warning about its limitations. But here's a simpler truth: AI works best not as an oracle delivering answers, but as a diligent partner offering possibilities.

About AI Demystifying

You don’t have to be an expert to understand AI, just like you don’t have to be a mechanic to drive a car.

But it can be challenging to sort through the noise - and we all need simple mental models, cartoons in our heads, for how technologies work.

AI Demystifying is a place to begin sorting through the hype, unpacking foundational concepts and developing frames of reference for AI.

Process is a set of tools, not rules.

AI Demystifying is another UX How Tool from Method Toolkit LLC.

Logo for AI Demystifying
Explore More like AI Demystifying from UX How
CoDesign AI

Collaborating with AI and each other in building experiences.

UX Designer Guide

UX and Product Designer insights for navigating design realities.

XD Prompts

A collection of prompt engineering techniques for UX.
About UX How and T. Parke

UX How is a set of UX & Product Design “How To” sites with insights, resources, and blueprints for Design, UX and AI.

T. Parke is the Director of UX How with prior experience at ESPN, Disney, and Alaska Airlines. He has previously been a design leader on projects for Rolling Stone, Microsoft, Nickelodeon, and Marvel.


Logos for Disney, ESPN, Microsoft, Rolling Stone and Marvel