When Not to Trust AI
AI sometimes makes things up — confidently. Here's how to spot it, when it matters, and the simple habits that protect you.
AI is shockingly good. It’s also shockingly confident — including when it’s completely wrong. Knowing where it fails is half of using it well.
The thing called “hallucination”
When AI doesn’t know an answer, it sometimes invents one. It will make up a quote. Cite a book that doesn’t exist. Get a date wrong. Misremember a law. Hallucinate a court case. And it will say all of it with the same confident tone it uses when it’s right.
This isn't a bug that can be fully fixed. It's inherent to how the technology works — AI predicts plausible text, and plausible text is sometimes just wrong.
The fix isn’t to stop using AI. The fix is to know when to double-check.
The three categories: when AI is great, when it's not
🟢 Trust it (mostly)
- Writing tasks — drafting, rewriting, summarizing, brainstorming. The output is yours to edit; correctness is up to you.
- Explaining concepts — physics, biology, philosophy, business ideas. It’s read more textbooks than you ever will.
- Code help for common tasks (popular libraries, well-worn patterns). It's much less reliable with obscure or fast-changing libraries.
- Translation of common languages. It’s surprisingly good.
- Structuring your thinking — pros/cons, frameworks, checklists.
🟡 Verify it
- Recent news, prices, schedules. Most AI tools have a knowledge cutoff date. Free ChatGPT might not know who won last week’s game.
- Specific dates, numbers, statistics. Spot-check the important ones.
- Names of real people in real contexts. It can mix up two people with similar names.
- Step-by-step instructions for software — interfaces change. The button might not be where it says.
🔴 Don’t blindly trust
- Legal advice for your specific situation. It can explain what a contract clause means in general; it cannot replace a lawyer reviewing yours.
- Medical advice and dosages. Useful for understanding what your doctor said. Not a replacement for your doctor.
- Financial advice for actual money decisions. It can explain a Roth IRA. Don’t use it to pick stocks.
- Citations and quotes. This is the #1 hallucination zone. If it cites a book, paper, or law, click through and confirm. Often the source doesn’t exist.
- Math with many digits. AI predicts text; it doesn't actually compute, which is why it's famously bad at arithmetic. Use a calculator, or ask it to "show your work" step by step.
Five habits that protect you
1. Ask “how confident are you?”
After an answer, type:
On a scale of 1–10, how confident are you in that answer? What parts are you least sure about? Where might you be making things up?
Surprisingly often, the AI will reply "actually, I'm not sure about X." That's gold: now you know exactly what to verify.
2. Make it cite (and click)
Cite the specific sources for the factual claims above. If you don’t have a real source, mark it [unsourced].
Then actually check the sources. If a link 404s, or the source doesn't say what the AI claims it says, that's a hallucination. (This happens. Often.)
3. Cross-check the important stuff
For anything that matters — a fact in a presentation, a number in a report, a name in an email — Google it or ask another AI. Two tools rarely hallucinate the same thing.
4. Watch for over-confident vagueness
If an answer sounds great but you can’t pin down what it actually says, that’s a red flag. Real expertise has specifics. Fluffy expertise is often invented expertise.
5. Default to “draft, not done”
The mental model: AI gives you a draft. You review it. You edit. You ship.
If you ever publish, send, or rely on AI output without reading it carefully, you’ll eventually be embarrassed. Maybe sued. There are public examples — lawyers fined for citing fake court cases AI made up. Don’t be that person.
What about bias?
AI is trained on the internet. The internet is biased — by language, geography, culture, and who writes online. AI inherits all of that.
In practice: it’s better at English than other languages. It knows more about American culture than, say, Indonesian culture. It can quietly reflect the assumptions of whoever wrote most of its training data.
The fix is awareness. Ask:
Whose perspective is missing from your answer? What would a [different group] say differently?
You’ll often get useful counter-perspective for free.
The honest bottom line
AI is a brilliant intern. Brilliant interns produce great work and occasionally hand you something embarrassingly wrong with full confidence. Read what they give you before you sign your name on it.
That’s it. That’s the whole skill.