Simon Willison’s Weblog

Quotations tagged homebrewllms in 2024


We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.

Phi-3 Technical Report # 23rd April 2024, 3 am
