We enhanced the ability of the upgraded Claude 3.5 Sonnet and Claude 3.5 Haiku to recognize and resist prompt injection attempts. Prompt injection is an attack where a malicious user feeds instructions to a model that attempt to change its originally intended behavior. Both models are now better able to recognize adversarial prompts from a user and behave in alignment with the system prompt. We constructed internal test sets of prompt injection attacks and specifically trained on adversarial interactions.
With computer use, we recommend taking additional precautions against the risk of prompt injection, such as using a dedicated virtual machine, limiting access to sensitive data, restricting internet access to required domains, and keeping a human in the loop for sensitive tasks.
Recent articles
- Initial explorations of Anthropic's new Computer Use capability - 22nd October 2024
- Everything I built with Claude Artifacts this week - 21st October 2024
- Running Llama 3.2 Vision and Phi-3.5 Vision on a Mac with mistral.rs - 19th October 2024