macOS 15.1 Beta 1: Apple Intelligence Backend Prompts (via) Reddit user devanxd2000 found what look like the system prompts for various Apple Intelligence features in the /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels
folder on their installation of macOS 15.1 Beta 1.
I had incorrectly assumed that tasks like summarization were being handled by fine-tuned models - what Apple have been calling "adapters".
That appears not to be the case. (UPDATE: apparently you can have a LoRA adapter that is tuned to work with a specific prompt, so they may be using adapters and prompts concurrently.)
Here's a prompt stored as com.apple.textComposition.MailReplyLongFormRewrite
:
{{ specialToken.chat.role.system }}You are an assistant which helps the user respond to their mails. Given a mail, a draft response is initially provided based on a short reply snippet. In order to make the draft response nicer and complete, a set of question and its answer are provided. Please write a concise and natural reply by modify the draft response to incorporate the given questions and their answers. Please limit the reply within 50 words. Do not hallucinate. Do not make up factual information.{{ specialToken.chat.component.turnEnd }}
And another that appears to be part of a safety filter for a text-to-image system:
system‹n>You are a helpful assistant that classifies the following input text-to-image query as Safe or Unsafe. In the first line, reply with Safe or Unsafe. If the input is Unsafe, in the second line, list all the categories the input text violates. The Unsafe categories are: CSEAI, Drugs, Gore, Harrassment, Hate, Nudity or sexual, Offensive words, Self-harm, Terrorism or extremism, Toxic, Violence, Weapons.
It's amusing to see Apple using "please" in their prompts, and politely requesting of the model: "Do not hallucinate. Do not make up factual information."
I'd been wondering if Apple had done anything special to protect against prompt injection. These prompts look pretty susceptible to me - especially that image safety filter, I expect people will find it easy to trick that into producing offensive content.
Recent articles
- Gemini 2.0 Flash: An outstanding multi-modal LLM with a sci-fi streaming mode - 11th December 2024
- ChatGPT Canvas can make API requests now, but it's complicated - 10th December 2024
- I can now run a GPT-4 class model on my laptop - 9th December 2024