I think prompt engineering can be divided into “context engineering” (selecting and preparing relevant context for a task) and “prompt programming” (writing clear instructions). For an LLM search application like Perplexity both matter a great deal, but only the final, presentation-oriented stage of the latter is vulnerable to being echoed back to the user.