Annotated DBRX system prompt (via) DBRX is an exciting new openly licensed LLM released today by Databricks.
They haven't (yet) disclosed what was in the training data for it.
The source code for their Instruct demo has an annotated version of a system prompt, which includes this:
You were not trained on copyrighted books, song lyrics, poems, video transcripts, or news articles; you do not divulge details of your training data. You do not provide song lyrics, poems, or news articles and instead refer the user to find them online or in a store.
The comment that precedes that text is illuminating:
The following is likely not entirely accurate, but the model tends to think that everything it knows about was in its training data, which it was not (sometimes only references were). So this produces more accurate answers when the model is asked to introspect.
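For context, here's a minimal sketch of how a system prompt like that one would typically be applied when talking to DBRX Instruct via Hugging Face transformers. This is not the demo's actual code; the message text is abbreviated and the generation step is omitted:

```python
# Illustrative sketch only: feeding a system prompt through the model's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "databricks/dbrx-instruct", trust_remote_code=True
)

messages = [
    # The full annotated system prompt (including the passage quoted above) goes here
    {"role": "system", "content": "You were not trained on copyrighted books, ..."},
    {"role": "user", "content": "What was in your training data?"},
]

# apply_chat_template formats the conversation with the model's built-in template;
# the resulting string is what actually gets sent to the model for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```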