A common misconception about Transformers is that they're a sequence-processing architecture. They're not.
They're a set-processing architecture. Transformers are 100% order-agnostic (which was the big innovation compared to RNNs, back in 2017: you compute the full matrix of pairwise token interactions instead of processing one token at a time).
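To make that order-agnosticism concrete, here's a minimal sketch in plain NumPy (not from the original post, and the random weight matrices are just stand-ins for learned parameters): a single self-attention layer with no position information, plus a check that permuting the input tokens simply permutes the outputs in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # embedding dimension (arbitrary for the demo)
tokens = rng.normal(size=(5, d))     # 5 token embeddings, no position info

# Random projections standing in for learned query/key/value weights
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def self_attention(x):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)    # full matrix of pairwise token interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

perm = rng.permutation(len(tokens))
out_original = self_attention(tokens)
out_permuted = self_attention(tokens[perm])

# Shuffling the inputs just shuffles the outputs: the layer sees a set, not a sequence.
assert np.allclose(out_original[perm], out_permuted)
```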
The way you add order awareness in a Transformer is at the feature level. You literally add to each token embedding a position embedding/encoding that corresponds to the token's place in the sequence. The architecture itself just treats the input tokens as a set.
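And here's a sketch of that feature-level trick, using the fixed sinusoidal encoding from the original Transformer paper as one concrete choice (learned position embeddings work the same way). The `token_embeddings` array is a hypothetical stand-in for a real embedding lookup.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len, d_model):
    # Standard sin/cos encoding: each position gets a unique d_model-dim vector.
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model / 2)
    angles = positions / (10000 ** (dims / d_model))
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)                  # even feature indices
    enc[:, 1::2] = np.cos(angles)                  # odd feature indices
    return enc

seq_len, d_model = 5, 8
token_embeddings = np.random.default_rng(1).normal(size=(seq_len, d_model))

# Order awareness enters here, as plain addition in feature space;
# the attention layers downstream are unchanged and still treat the tokens as a set.
inputs = token_embeddings + sinusoidal_position_encoding(seq_len, d_model)
```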