A few weeks ago, we had the chance to engage in a fascinating conversation with Peter Singer’s AI avatar—an experimental project built using his writings and public statements to provide answers that closely reflect his positions. The avatar, while compelling, left us wondering: what would the real Peter Singer say?
We don’t need to wonder anymore.
We recently spoke with the philosopher to discuss the ethics of artificial intelligence in education—a topic that sits at the intersection of his lifelong concerns about human welfare, justice, and rational thinking. True to form, Singer offered a measured, thoughtful take on the promises and risks of AI.
Why Create an AI Version of Yourself?
Singer shared that the idea didn’t originate with him, but with a conference attendee who offered to build it. The philosopher saw clear value: he receives a constant flow of emails from around the world, asking for his views on everything from animal rights to global poverty. The AI avatar allows him to respond—indirectly but meaningfully—to many more people than he could alone. Ethical concerns? Few, according to Singer. He’s more interested in making his views accessible than in protecting royalties or branding.
Still, not everyone is convinced. Singer mentioned that his Australian publisher worries the avatar might reduce book sales. Why buy a book when you can ask an AI for a free summary of Singer’s position on euthanasia or effective altruism?
Can AI Improve Ethical Decision-Making?
Singer sees potential—particularly in professions where ethical training is thin, like healthcare. He suggests that AI could help nurses, doctors, or psychotherapists identify moral dimensions they might overlook. “It’s not about getting definitive answers,” he noted, “but about learning which ethical questions to ask.”
This insight, he believes, applies just as well to educators. Teachers, too, deal with complex moral dynamics—especially when working with young people—and AI might assist them in reflecting more systematically.
What Is the Most Pressing Ethical Issue for AI in Education?
When asked what he considers the most urgent ethical concern in using AI in schools, Singer pointed to several:
- Privacy and data protection, particularly when students interact with AI systems.
- Bias and fairness, ensuring that AI doesn’t replicate or worsen existing inequalities.
- The human factor, especially the risk of replacing real teachers with cheaper AI tutors, which could deepen educational divides: wealthier students keep access to human interaction while poorer ones are left with chatbots.
Still, Singer remains pragmatic. “We’re not in a perfect situation now,” he said. “In many countries, poorer students already lack access to quality education.” If AI can deliver personalized attention—even from a machine—it may be better than overcrowded classrooms with no individualized support.
AI in the Classroom: Supplement, Not Substitute
Although Singer retired from full-time teaching in 2024, he acknowledged he might have used AI as a supplemental tool had he stayed in the classroom. As for students, he raised a crucial ethical dilemma: to what extent should they use AI for writing papers or completing assignments?
He doesn’t offer simple answers but emphasizes the importance of maintaining the integrity of learning. Teachers, in turn, may need to adapt—perhaps by emphasizing oral assessments or getting to know their students’ unique voices.
The Bigger Picture: Energy, Intelligence, and Long-Term Risk
Singer also reflected on broader philosophical concerns—like whether highly rational AI systems might one day conclude that humans are net-negative for the planet. While he doesn’t consider this an imminent danger, he acknowledges the theoretical plausibility. It’s a reminder that building AI with ethical reasoning doesn’t just mean coding in human values—it means confronting the values we, ourselves, often sideline.
So, Who Did It Better—Peter Singer or His AI Avatar?
We put the same questions to both, and the answers were strikingly aligned. Singer laughed when we told him: “You’re basically on the same page.” Still, he admitted his AI version might be “a bit more cautious” than he is, a safer, more diplomatic version of the philosopher.
Perhaps that’s the point.
AI can amplify ideas, scale philosophical conversations, and offer guidance—but it can’t yet replace the ethical clarity, humility, and humanity of someone like Peter Singer.
And for now, we’re glad the real one is still here to guide us.
Curious how his AI answered some of the same questions?
Read our interview with Peter Singer’s chatbot here.