Image created by Martin Ågren using the AI model DALL-E.
Written by Martin Ågren - March 18, 2025
Simple explanation: When AI generates incorrect information.
Technical explanation: A generative AI (GenAI) model predicts tokens in a way that produces output containing incorrect information.
Cause: Insufficient pre-training or a lack of available information on the topic.
English – Usually safest to write in English, except when the topic is primarily discussed in Swedish.
Open-ended questions – These keep the AI from being misled by, or fixating on, specific details.
Short conversations – Avoid excessive details; keep input and output concise.
Internet search – Press the Search button, or instruct the AI to search online: “Search multiple sites online and provide clickable source links.”
Paste reference text – Provide relevant text at the start of the chat and instruct the AI to use it.
Upload files – Upload relevant documents for the AI to reference.
Temperature setting – Lower temperature reduces overly creative responses.
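The "paste reference text" tip above can be sketched as a simple prompt template. This is a generic illustration, not the required format of any particular chatbot; the exact instruction wording is an assumption.

```python
def grounded_prompt(reference_text: str, question: str) -> str:
    """Build a prompt that asks the AI to answer only from the pasted text."""
    return (
        "Use ONLY the reference text below to answer. "
        "If the answer is not in it, say so.\n\n"
        f"Reference text:\n{reference_text}\n\n"
        f"Question: {question}"
    )

# Example: the model is steered toward the supplied text
# instead of relying on what it may (mis)remember.
prompt = grounded_prompt(
    "The Eiffel Tower is 330 m tall.",
    "How tall is the Eiffel Tower?",
)
```

Placing the reference text before the question, with an explicit "only from this text" instruction, is what makes the model less likely to fill gaps with invented details.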
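To see why a lower temperature reduces overly creative responses, here is a minimal sketch of temperature scaling as used in token sampling (illustrative only; real models apply this to vocabularies of many thousands of tokens):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw token scores into sampling probabilities.

    Dividing by the temperature before the softmax means:
    T < 1 sharpens the distribution (more deterministic output),
    T > 1 flattens it (more varied, "creative" output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate tokens.
logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # low temperature
high = softmax_with_temperature(logits, 1.5)  # high temperature
# With low temperature, nearly all probability mass lands on the
# top-scoring token, so the model sticks to its most likely answer.
```

This is why lowering the temperature setting, where the chatbot exposes it, makes the model less prone to wandering into plausible-sounding but unsupported claims.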
Check source links – Hover over links; if they’re not clickable, the information may be unreliable.
AI self-checking – Ask the AI to review its own response for errors.
Direct citations – Request exact word-for-word quotes from sources: “Make a citation by quoting word for word, from your source…”
Verify sources – Click source links and read content to confirm accuracy.
Experience – The more you use AI, the easier it is to detect hallucinations.
Bias awareness – Different AI chatbots may provide different answers due to bias.
Compare AI responses – Cross-check answers from multiple AI chatbots.
Your expertise – Errors are easier to spot in topics where you have strong knowledge.