Fighting RFCs with RFCs
Google’s recently released Web Accelerator apparently has some scary side-effects. It’s been spotted pre-loading links in password-protected applications, which can amount to clicking on every “delete this” link — bypassing even the JavaScript prompt you carefully added to give people the chance to think twice.
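To make the failure mode concrete, here's a minimal sketch of the anti-pattern (using Flask and an in-memory dict purely for illustration; none of this is from the affected applications): a state-changing action exposed as a plain GET link.

```python
from flask import Flask

app = Flask(__name__)
items = {42: "an important record"}  # stand-in for a real database

# Flask routes answer GET by default, so a prefetcher that follows
# <a href="/delete/42"> deletes the record. An onclick confirm() in the
# page never fires, because prefetchers do not execute JavaScript.
@app.route("/delete/<int:item_id>")
def delete_item(item_id):
    items.pop(item_id, None)
    return "Deleted"
```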
"Aah," I hear you cry, "but RFC 2616 clearly states that you shouldn’t perform state changing operations with a GET or HEAD method!"
> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.
I’ll see your RFC 2616 and raise you an RFC 2119:
> SHOULD NOT: This phrase, or the phrase “NOT RECOMMENDED” mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.
Hiding your dangerous delete links behind an authentication scheme is a perfectly acceptable compromise. Web Accelerator is B.A.D.
Update: Be sure to read the excellent discussion brewing in the comments. Hiding behind authentication may not be as acceptable a compromise as I had first thought.
Update 2: If you haven’t been following the comments, I’ve had a change of heart. Even in the absence of Web Accelerator, hiding behind authentication leaves your application open to some very nasty security vulnerabilities: a malicious page can piggy-back on your users’ sessions and cause havoc by making dangerous GET calls. I still think the RFC language covers people who thought long and hard before implementing a dangerous GET, but if you haven’t thought about security and about pre-fetching caching proxies such as Web Accelerator, you haven’t been thinking hard enough.
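To make that attack concrete: a hostile page needs nothing more than something like `<img src="http://example.com/delete/42">` (a hypothetical URL), and the victim’s browser sends the GET with their session cookie attached. Restricting the route to POST, as in this hedged sketch (again assuming a Flask-style app), at least stops image tags and prefetchers from triggering it, though as the next update notes it is not a complete defence:

```python
from flask import Flask

app = Flask(__name__)
items = {42: "an important record"}  # stand-in for a real database

# Only POST is accepted now: a GET from an <img> tag, a prefetching proxy,
# or a casually followed link gets 405 Method Not Allowed, not a deletion.
@app.route("/delete/<int:item_id>", methods=["POST"])
def delete_item(item_id):
    items.pop(item_id, None)
    return "Deleted"
```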
Update 3: So, it turns out using POST is no defence at all against cross-site request forgery (CSRF) attacks either. I’ve been learning a whole bunch of interesting stuff this evening.
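The reason POST alone doesn’t help is that a malicious page can auto-submit a hidden form, and the browser will happily POST cross-site with the victim’s cookies attached. The standard mitigation is a per-session secret token embedded in every form and verified on every state-changing request. Here is a sketch of that idea, again using Flask rather than anything from the original post:

```python
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # required for Flask sessions
items = {42: "an important record"}  # stand-in for a real database

def csrf_token():
    # One unguessable token per session; every form embeds it.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

@app.route("/item/<int:item_id>")
def show_item(item_id):
    # Render a delete form that carries the session's CSRF token.
    return (
        f'<form method="POST" action="/delete/{item_id}">'
        f'<input type="hidden" name="csrf_token" value="{csrf_token()}">'
        f'<input type="submit" value="Delete"></form>'
    )

@app.route("/delete/<int:item_id>", methods=["POST"])
def delete_item(item_id):
    # A hostile page can force the browser to POST here, but it cannot read
    # the victim's token, so the forged request fails this comparison.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    items.pop(item_id, None)
    return "Deleted"
```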