I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself.
— James Luan, Engineering architect of Milvus