Releasing Common Corpus: the largest public domain dataset for training LLMs (via)

Released today. 500 billion words from “a wide diversity of cultural heritage initiatives”. 180 billion words of English, 110 billion of French, 30 billion of German, then Dutch, Spanish and Italian.
Includes quite a lot of US public domain data: 21 million digitized out-of-copyright newspapers (or do they mean newspaper articles?).
“This is only an initial part of what we have collected so far, in part due to the lengthy process of copyright duration verification. In the following weeks and months, we’ll continue to publish many additional datasets also coming from other open sources, such as open data or open science.”
Coordinated by French AI startup Pleias and supported by the French Ministry of Culture, among others.
I can’t wait to try a model that’s been trained on this.
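In the meantime, the corpus is published on Hugging Face, so one quick way to poke at it is to stream a few records with the datasets library rather than downloading hundreds of billions of words. A minimal sketch, assuming the US newspapers portion lives under an identifier like PleIAs/US-PD-Newspapers (check the Common Corpus collection on Hugging Face for the exact dataset names and fields):

```python
# Minimal sketch: stream a few records from one Common Corpus subset.
# The dataset identifier is an assumption - consult the Common Corpus
# collection on Hugging Face for the actual names.
from datasets import load_dataset

# streaming=True avoids downloading the whole subset locally
ds = load_dataset("PleIAs/US-PD-Newspapers", split="train", streaming=True)

for i, record in enumerate(ds):
    # Each record is a dict; print the keys and a snippet of whatever
    # text field the subset uses, without assuming a specific schema.
    print(record.keys())
    if i >= 2:
        break
```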