Confession: we've been hiding parts of v0's responses from users since September. Since the launch of DeepSeek's web experience and its positive reception, we realize now that was a mistake. From now on, we're also showing v0's full output in every response. This is a much better UX because it feels faster and it teaches end users how to prompt more effectively.
— Jared Palmer, VP of AI at Vercel