Fast groq-hosted LLMs vs browser jank (via) Groq is now serving LLMs such as Llama 3 so quickly that JavaScript which attempts to render Markdown strings on every new token can cause performance issues in browsers.
Taras Glek's solution was to move the rendering to a requestAnimationFrame() callback, effectively buffering the rendering to the fastest rate the browser can support.
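Here is a minimal sketch of that pattern, not Taras Glek's actual code: incoming tokens are accumulated in a buffer and the DOM is re-rendered at most once per animation frame. The `renderMarkdown()` function and `outputElement` node are hypothetical stand-ins for whatever Markdown renderer and target element a page actually uses.

```javascript
// Hypothetical stand-ins: swap in your own renderer and target node.
const outputElement = document.querySelector("#output");
const renderMarkdown = (text) => text; // e.g. marked.parse(text)

let pendingText = "";
let frameScheduled = false;

// Called for every streamed token; cheap, just appends to the buffer.
function onToken(token) {
  pendingText += token;
  if (!frameScheduled) {
    frameScheduled = true;
    // Render at most once per frame, no matter how many tokens
    // arrived since the last paint.
    requestAnimationFrame(() => {
      frameScheduled = false;
      outputElement.innerHTML = renderMarkdown(pendingText);
    });
  }
}
```

The effect is that a model emitting hundreds of tokens per second only triggers as many Markdown re-renders as the browser can actually paint, typically 60 per second.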