
When Not to Trust AI

AI sometimes makes things up — confidently. Here's how to spot it, when it matters, and the simple habits that protect you.

AI is shockingly good. It’s also shockingly confident — including when it’s completely wrong. Knowing where it fails is half of using it well.

The thing called “hallucination”

When AI doesn’t know an answer, it sometimes invents one. It will make up a quote. Cite a book that doesn’t exist. Get a date wrong. Misremember a law. Hallucinate a court case. And it will say all of it with the same confident tone it uses when it’s right.

This is not a bug that AI companies can fully fix. It’s part of how the technology works — AI predicts plausible text, and sometimes plausible text is just wrong.

The fix isn’t to stop using AI. The fix is to know when to double-check.

The 3 categories: when AI is great, when it’s not

🟢 Trust it (mostly)

🟡 Verify it

🔴 Don’t blindly trust

Five habits that protect you

1. Ask “how confident are you?”

After an answer, type:

On a scale of 1–10, how confident are you in that answer? What parts are you least sure about? Where might you be making things up?

Surprisingly often, the AI will reply “actually, I’m not sure about X.” That’s gold. You learn exactly what to verify.

2. Make it cite (and click)

Cite the specific sources for the factual claims above. If you don’t have a real source, mark it [unsourced].

Then check the sources. If a link 404s or the source doesn’t say what the AI claims it says — that’s a hallucination. (This happens. Often.)
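If you’re comfortable with a little Python, you can even make the “[unsourced]” convention from the prompt above machine-checkable. This is a minimal sketch, not a real verification tool — the function name and the (claim, source) pair format are my own invention, and it only catches sources that are missing or explicitly marked, not sources that are fabricated:

```python
def flag_unsourced(claims):
    """Given a list of (claim, source) pairs, return the claims whose
    source is missing, empty, or explicitly marked [unsourced]."""
    flagged = []
    for claim, source in claims:
        # A blank source or a literal "[unsourced]" marker means the AI
        # admitted it has nothing to back the claim — verify it yourself.
        if not source or source.strip().lower() == "[unsourced]":
            flagged.append(claim)
    return flagged
```

For example, `flag_unsourced([("GDP grew 3% last year", "[unsourced]"), ("Paris is the capital of France", "CIA World Factbook")])` returns only the first claim — the one you still need to check by hand.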

3. Cross-check the important stuff

For anything that matters — a fact in a presentation, a number in a report, a name in an email — Google it or ask another AI. Two tools rarely hallucinate the same thing.
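For short factual answers, the cross-check itself can be sketched in a few lines of Python. This is an illustration of the idea, assuming you’ve already pasted in the two answers by hand; the normalization rules (lowercase, drop punctuation) are my own rough guess at what counts as “the same answer”:

```python
import re

def normalize(answer):
    # Lowercase, strip punctuation, collapse whitespace, so that
    # "July 4, 1776" and "july 4 1776" compare as equal.
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", answer.lower())).strip()

def tools_agree(answer_a, answer_b):
    # Two independent tools rarely hallucinate the same thing, so
    # agreement is weak evidence the answer is right — not proof.
    return normalize(answer_a) == normalize(answer_b)
```

Here `tools_agree("July 4, 1776", "july 4 1776")` is `True`, while two genuinely different answers come back `False` — your cue to dig further.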

4. Watch for over-confident vagueness

If an answer sounds great but you can’t pin down what it actually says, that’s a red flag. Real expertise has specifics. Fluffy expertise is often invented expertise.

5. Default to “draft, not done”

The mental model: AI gives you a draft. You review it. You edit. You ship.

If you ever publish, send, or rely on AI output without reading it carefully, you’ll eventually be embarrassed. Maybe sued. There are public examples — lawyers fined for citing fake court cases AI made up. Don’t be that person.

What about bias?

AI is trained on the internet. The internet is biased — by language, geography, culture, and who writes online. AI inherits all of that.

In practice: it’s better at English than other languages. It knows more about American culture than, say, Indonesian culture. It can quietly reflect the assumptions of whoever wrote most of its training data.

The fix is awareness. Ask:

Whose perspective is missing from your answer? What would a [different group] say differently?

You’ll often get useful counter-perspective for free.

The honest bottom line

AI is a brilliant intern. Brilliant interns produce great work and occasionally hand you something embarrassingly wrong with full confidence. Read what they give you before you sign your name on it.

That’s it. That’s the whole skill.

Where to go next

Lesson 07 — AI for People Who Hate Tech →
