The Perils of Perceiving AI as Human

The rapid rise of artificial intelligence (AI) has sparked excitement, hope, and even fear. From self-driving cars to AI-powered personal assistants, these technologies are becoming deeply integrated into our everyday lives. But along with this integration comes a challenge: our tendency to treat AI as if it were human. This habit, known as anthropomorphism, might seem harmless at first, but it can mislead us into overestimating what AI can do and underestimating the risks it poses. This article reflects my personal thoughts on this issue, highlighting the hidden dangers of seeing AI as more human than it really is.

When AI Feels Human, But Isn’t

Humans have a natural instinct to see personalities in objects, animals, and even machines. Think of how we name our cars or talk to our phones—this is just part of being human. But when we apply these same instincts to AI, it becomes more than a quirky habit; it can cloud our judgement. AI systems, including language models like ChatGPT, are brilliant at mimicking human speech and behaviour, but that does not mean they understand the words they use or the concepts behind them.

Consider the analysis of ChatGPT’s responses to W.B. Yeats’ poem “Blood and the Moon” [1]. The poem explores themes of leadership and national struggles—heavy topics that require deep understanding. While ChatGPT can generate convincing responses about values like social justice or fairness, it struggles to apply these ideas meaningfully to real-world complexities. In one instance, it gave vague and safe answers about the poem’s meaning, sidestepping any concrete moral insights.

This example reveals a crucial truth: AI can sound insightful without actually comprehending. It works within the limits of the data it’s trained on—essentially recycling human-created content. Its apparent intelligence is an illusion of pattern recognition, not genuine understanding. And this is where the real danger lies.

Why Seeing AI as Human is Risky

Projecting human traits onto AI might make it feel relatable, even trustworthy, but it distorts how we assess its abilities and risks. There are three key dangers tied to this:

  1. Exaggerated Expectations
    When people see AI as a thinking, feeling entity, they can expect too much from it. We might assume that AI understands human values like fairness or empathy, and we could begin to rely on it to solve complex issues—like improving public policy or delivering fair justice. But AI lacks the moral intuition and lived experience needed for such tasks. If we expect AI to handle sensitive or critical decisions, we risk placing too much trust in something that is not equipped for the job.
  2. Blindness to Biases
    Every AI system is built by humans and trained on data from the real world, which means biases inevitably sneak in. But if we see AI as a neutral friend or a wise counsellor, we might ignore the fact that its decisions are shaped by biased inputs. This can have serious consequences, especially when AI is used in areas like hiring, policing, or healthcare. Without a critical eye, we might allow these systems to reinforce stereotypes or unfair practices.
  3. Erosion of Accountability
    The more we treat AI as if it has its own intentions or motivations, the easier it becomes to shift responsibility away from the humans behind it. If an AI system makes a mistake—say, delivering a biased verdict in a legal case—people might say, “It was the AI’s fault,” instead of questioning the developers or users who set it up. This erosion of accountability can lead to a dangerous lack of oversight.

The Normalisation of AI’s Influence

Beyond the immediate risks of bias or misjudgement, viewing AI as human-like can reshape our relationship with these technologies in subtle but powerful ways. As AI systems become more embedded in critical sectors like governance, healthcare, and education, it’s easy to stop questioning their influence. We may gradually accept AI’s decisions without scrutiny, simply because it feels like a reliable “partner.”

This normalisation is troubling because it encourages passivity. If we assume AI is capable of independent, thoughtful judgement, we might stop asking tough questions about its limitations. This is especially risky when decisions have high stakes—like those involving human rights or life-and-death medical scenarios. AI may be an impressive tool, but it cannot and should not replace human judgement.

Embracing a Balanced Perspective

The solution isn’t to reject AI or fear it, but to approach it with a clear understanding of what it really is: a tool. A powerful, complex tool, but still just that. AI systems are designed by humans, and their effectiveness depends on how well we use and manage them. Treating AI as human distracts us from this reality and makes it harder to see where its true strengths and weaknesses lie.

If we avoid the trap of anthropomorphism, we will be better equipped to critically assess AI’s role in society. This means questioning how AI is developed, ensuring transparency in its design, and placing appropriate safeguards around its use. Instead of expecting AI to solve all our problems, we need to stay engaged and take responsibility for the decisions we delegate to these systems.

Conclusion: Staying Grounded in Reality

In a world increasingly shaped by AI, resisting the urge to see these systems as human-like is essential. AI may be sophisticated, but it lacks the depth of human experience, intuition, and morality. Treating it as a sentient entity leads to unrealistic expectations, blinds us to its biases, and makes it easier to shift accountability away from people.

At the end of the day, AI is here to serve us—not to think or feel like us. If we keep this distinction in mind, we will be in a much better position to harness AI’s potential while avoiding its pitfalls. The key is to stay vigilant and maintain a thoughtful balance. By doing so, we ensure that these technologies remain tools that enhance human life, rather than control or diminish it.

References

[1] P. Isackson, “Outside the Box: Is AI Lying About Its Values?,” Fair Observer, Aug. 03, 2024. [Online]. Available: https://www.fairobserver.com/business/technology/artificial-intelligence/outside-the-box-is-ai-lying-about-its-values/
