Contrast [Apple Intelligence] with what OpenAI is trying to accomplish with its GPT models, or Google with Gemini, or Anthropic with Claude: those large language models are trying to incorporate all available public knowledge and know everything. That is a dramatically larger and more difficult problem space, which is why they get things wrong. There is also a lot they don't know, because that information is locked away, like all of the information on an iPhone.