We've adjusted prompt caching so that you now only need to specify cache write points in your prompts; we'll automatically check for cache hits at previous positions. No more manual tracking of read locations needed.
— Alex Albert, Anthropic