Finally, a replacement for BERT: Introducing ModernBERT (via) BERT was an early language model released by Google in October 2018. Unlike modern LLMs it wasn't designed for generating text. BERT was trained for masked token prediction and was generally applied to problems like Named Entity Recognition or Sentiment Analysis. BERT also wasn't very useful on its own - most applications required you to fine-tune a model on top of it.
In exploring BERT I decided to try out dslim/distilbert-NER, a popular Named Entity Recognition model fine-tuned on top of DistilBERT (a smaller distilled version of the original BERT model). Here are my notes on running that using uv run.
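Here's a minimal sketch of that kind of Named Entity Recognition call, assuming the standard Transformers token-classification pipeline (the example sentence and the aggregation_strategy setting are my own, not from those notes):

```python
from transformers import pipeline

# Named Entity Recognition with the DistilBERT-based model
ner = pipeline(
    "ner",
    model="dslim/distilbert-NER",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Simon Willison lives in Half Moon Bay, California."))
```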
Jeremy Howard's Answer.AI research group, LightOn and friends supported the development of ModernBERT, a brand new BERT-style model that applies many enhancements from the past six years of advances in this space.
While BERT was trained on 3.3 billion tokens, producing 110 million and 340 million parameter models, ModernBERT was trained on 2 trillion tokens, resulting in 140 million and 395 million parameter models. The parameter count hasn't increased much because it's designed to run on lower-end hardware. It has an 8,192 token context length, a significant improvement on BERT's 512.
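One quick way to sanity-check that context length is to compare the max_position_embeddings value in each model's config. A sketch, assuming a transformers build that includes the ModernBERT architecture:

```python
from transformers import AutoConfig

# Compare maximum sequence lengths by reading each model's config
# (this only downloads the small config files, not the model weights)
for model_id in ("google-bert/bert-base-uncased", "answerdotai/ModernBERT-base"):
    config = AutoConfig.from_pretrained(model_id)
    print(model_id, config.max_position_embeddings)
```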
I was able to run one of the demos from the announcement post using uv run like this (I'm not sure why I had to use `numpy<2.0` but without that I got an error about `cannot import name 'ComplexWarning' from 'numpy.core.numeric'`):
```bash
uv run --with 'numpy<2.0' --with torch --with 'git+https://github.com/huggingface/transformers.git' python
```
Then this Python:
```python
import torch
from transformers import pipeline
from pprint import pprint

pipe = pipeline(
    "fill-mask",
    model="answerdotai/ModernBERT-base",
    torch_dtype=torch.bfloat16,
)

input_text = "He walked to the [MASK]."
results = pipe(input_text)
pprint(results)
```
Which downloaded 573MB to `~/.cache/huggingface/hub/models--answerdotai--ModernBERT-base` and output:
```
[{'score': 0.11669921875,
  'sequence': 'He walked to the door.',
  'token': 3369,
  'token_str': ' door'},
 {'score': 0.037841796875,
  'sequence': 'He walked to the office.',
  'token': 3906,
  'token_str': ' office'},
 {'score': 0.0277099609375,
  'sequence': 'He walked to the library.',
  'token': 6335,
  'token_str': ' library'},
 {'score': 0.0216064453125,
  'sequence': 'He walked to the gate.',
  'token': 7394,
  'token_str': ' gate'},
 {'score': 0.020263671875,
  'sequence': 'He walked to the window.',
  'token': 3497,
  'token_str': ' window'}]
```
I'm looking forward to trying out models that use ModernBERT as their base. The model release is accompanied by a paper (Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference) and new documentation for using it with the Transformers library.
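As a rough sketch of what building on ModernBERT might start from - assuming the same git build of Transformers, and a hypothetical two-label classification head that you would still need to fine-tune - something like this loads the base model ready for training:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load ModernBERT with a freshly initialized classification head
# (num_labels=2 is an arbitrary example, e.g. binary sentiment)
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=2
)
inputs = tokenizer("ModernBERT looks promising.", return_tensors="pt")
print(model(**inputs).logits)  # untrained head, so these logits are meaningless until fine-tuned
```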