Regarding the recent blog post, I think a simpler explanation is that hallucinating a non-existent library is such an inhuman error that it throws people. A human making such an error would be almost unforgivably careless.