How prompt injection attacks hijack today's top-end AI – and it's really tough to fix. Thomas Claburn interviewed me about prompt injection for the Register. Lots of direct quotes from our phone call in here—we went pretty deep into why it’s such a difficult problem to address.
Recent articles
- Run LLMs on macOS using llm-mlx and Apple's MLX framework - 15th February 2025
- URL-addressable Pyodide Python environments - 13th February 2025
- Using pip to install a Large Language Model that's under 100MB - 7th February 2025