Often, you are told to do this by treating AI like an intern. In retrospect, however, I think that this particular analogy ends up making people use AI in very constrained ways. To put it bluntly, any recent frontier model (by which I mean Claude 3.5, GPT-4o, Grok 2, Llama 3.1, or Gemini 1.5 Pro) is likely much better than any intern you would hire, but also weirder.
Instead, let me propose a new analogy: treat AI like an infinitely patient new coworker who forgets everything you tell them each new conversation, one that comes highly recommended but whose actual abilities are not that clear.