I know these are real risks, and to be clear, when I say an AI “thinks,” “learns,” “understands,” “decides,” or “feels,” I’m speaking metaphorically. Current AI systems don’t have a consciousness, emotions, a sense of self, or physical sensations. So why take the risk? Because as imperfect as the analogy is, working with AI is easiest if you think of it like an alien person rather than a human-built machine. And I think that is important to get across, even with the risks of anthropomorphism.
— Ethan Mollick, in March 2024