Telling the AI to "make it better" after getting a result is essentially a folk method of prompting an LLM into Chain of Thought reasoning, which is why it works so well.
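As a rough sketch of what that pattern looks like when scripted, here is how a "make it better" follow-up loop might be written with the LLM Python library from the posts below; the model name and number of refinement rounds are arbitrary choices for illustration, not recommendations:

```python
import llm

# Minimal sketch of the "make it better" loop, assuming the LLM Python
# library's conversation API. Model name and round count are illustrative.
model = llm.get_model("gpt-4o-mini")
conversation = model.conversation()

# First pass: get an initial answer.
response = conversation.prompt(
    "Write a Python function that deduplicates a list while preserving order."
)
print(response.text())

# Follow-up passes: each "make it better" nudge gives the model another
# chance to reconsider its previous output within the same conversation.
for _ in range(2):
    response = conversation.prompt("Make it better.")
    print(response.text())
```

Because each follow-up runs in the same conversation, the model sees its earlier output and is effectively asked to critique and revise it, which is where the Chain of Thought-like benefit comes from.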