Plurk: Instant conversations using Comet (via) Plurk’s comet implementation sounds pretty amazing. They’re using a single quad-core server with 32GB of RAM running 8 Node.js instances to serve long-polled comet to 100,000+ simultaneous users. They switched to Node from Java JBoss/Netty and found the new solution used a tenth of the memory.
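The core trick behind long-polled comet is that each client keeps an HTTP request open until the server has something to push, then immediately reconnects. Here's a minimal sketch of that pattern using Node's built-in http module — illustrative only, not Plurk's actual code; the routes, port, and variable names are my own assumptions:

```typescript
// Minimal long-polling sketch (not Plurk's implementation).
// Clients GET /poll and the response is held open until a message arrives
// via POST /send, or a timeout tells them to reconnect.
import * as http from "http";

const PORT = 8080; // assumed port for the sketch

// Responses currently parked on an open long-poll request.
let waiting: http.ServerResponse[] = [];

const server = http.createServer((req, res) => {
  if (req.url === "/poll") {
    // Park the response instead of answering right away.
    waiting.push(res);
    const timeout = setTimeout(() => {
      waiting = waiting.filter((r) => r !== res);
      res.writeHead(204).end(); // nothing to report; client re-polls
    }, 30_000);
    res.on("close", () => clearTimeout(timeout));
  } else if (req.url === "/send" && req.method === "POST") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Flush the message to every parked client, then let them reconnect.
      for (const r of waiting) {
        r.writeHead(200, { "Content-Type": "application/json" }).end(body);
      }
      waiting = [];
      res.writeHead(202).end();
    });
  } else {
    res.writeHead(404).end();
  }
});

server.listen(PORT);
```

Each parked client costs only an open socket and a small closure on the event loop rather than a thread, which is what makes holding six figures of simultaneous connections on one box plausible.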