Claude: How to use system prompts. Documentation for the new system prompt support added in Claude 2.1. The design surprises me a little: the system prompt is just the text that comes before the first instance of the text “Human: ...”—but Anthropic promise that instructions in that section of the prompt will be treated differently and followed more closely than any instructions that follow.
This whole page of documentation is giving me some pretty serious prompt injection red flags to be honest. Anthropic’s recommended way of using their models is entirely based around concatenating together strings of text using special delimiter phrases.
I’ll give it points for honesty though. OpenAI use JSON to field different parts of the prompt, but under the hood they’re all concatenated together with special tokens into a single token stream.
Recent articles
- How often do LLMs snitch? Recreating Theo's SnitchBench with LLM - 31st May 2025
- Talking AI and jobs with Natasha Zouves for News Nation - 30th May 2025
- Large Language Models can run tools in your terminal with LLM 0.26 - 27th May 2025