AI 'Thinks' or 'Knows'? Experts Warn of Misleading Language
19 Apr
Summary
- Using mental verbs for AI can make machines seem more human than they are.
- News writers rarely pair AI terms with human-like verbs, study finds.
- Context is crucial, as 'needs' can describe requirements, not human traits.

Researchers are cautioning against the common practice of describing artificial intelligence with human mental verbs, warning that it can create a misleading impression of AI capabilities. Language like 'AI thinks' or 'ChatGPT knows' can imply consciousness or intent where none exists, since AI systems operate by analyzing data patterns rather than forming beliefs.
Despite the potential for anthropomorphism, a study analyzing over 20 billion words from news articles found that news writers infrequently pair AI-related terms with these human-like verbs. The verb 'needs' was the most frequent pairing with AI, but it typically appeared in contexts describing technical requirements or passive actions, not human traits.
The researchers emphasize that anthropomorphism exists on a spectrum, and context is key: phrases that attribute reasoning or awareness to a system go beyond simply describing what it does. Understanding these linguistic nuances is vital for professionals who need to communicate accurately about AI's capabilities and the human responsibility behind these tools.