Defending web applications against dictionary attacks
22nd January 2004
Over at Reflective Surface, Ronaldo M. Ferraz discusses the usability of an authentication system that locks down an account for a certain period of time after three failed login attempts. Ronaldo sees this as a trade-off between usability and security, but I see it more as an added security issue, in that it allows malicious third parties to lock other users’ accounts armed only with their username.
The problem then is how best to defend web applications against brute-force password-guessing attacks without enabling denial of service attacks at the same time. The largest risk is from automated scripts that try every possible password until they get in. Identifying these attacks should be trivial—a real user might fail a dozen or so times, but is unlikely to try hundreds of combinations in quick succession. Assuming a malicious cracking attempt has been identified, what steps should a system take to foil the attack while still allowing the real user to access the site?
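For illustration, here’s a minimal sketch of that detection step in Python. The thresholds and the in-memory store are my own assumptions; a real deployment would keep the counts in a database or cache shared across servers:

```python
import time
from collections import defaultdict

# Hypothetical thresholds: a dozen failures inside five minutes is well
# beyond what a genuine user fumbling their password would produce.
MAX_FAILURES = 12
WINDOW_SECONDS = 300

# Failure timestamps keyed by username (in-memory for this sketch only).
failures = defaultdict(list)

def record_failure(username):
    """Note a failed login attempt against this account."""
    failures[username].append(time.time())

def looks_like_attack(username):
    """True if the account has seen too many failures in the window."""
    cutoff = time.time() - WINDOW_SECONDS
    recent = [t for t in failures[username] if t >= cutoff]
    failures[username] = recent  # discard stale entries
    return len(recent) >= MAX_FAILURES
```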
I can think of a few options, none of which seem like the ideal solution:
- Ban login requests from the attacker’s IP address (sketched after this list). This introduces the usual problems with IP banning, namely the risk of banning a whole bunch of people indiscriminately while leaving the attacker free to skip the ban using open web proxies.
- Lock the user’s account and email them a warning of the attack along with a special key needed to unlock the account again (also sketched after this list). This relies on the user having access to their email account the next time they need to use the system. It also assumes that the user’s email account is secure, but since both the user’s password and the secret unlocking key are required to access the system, email security matters less (the user’s password is never sent with the unlock key).
- Send an automated alert to a system administrator so they can analyze the situation in real time and take any necessary action. This relies on administrators being available 24/7—hardly a safe assumption for most systems.
- After a certain number of failed attempts, challenge the user to “prove their humanity” with one of those obscured-text-as-image CAPTCHA things. This comes with accessibility issues that remain unresolved.
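To make the first option concrete, here’s a rough sketch of a temporary per-IP ban with an expiry. The one-hour duration is an arbitrary assumption, and the in-memory dictionary stands in for whatever shared store a real system would use:

```python
import time

BAN_SECONDS = 3600  # assumed one-hour ban

# Banned IP -> expiry timestamp. Note the caveats from the list above:
# many users can share one address (proxies, NAT), and an attacker can
# hop between open proxies to dodge the ban entirely.
banned_ips = {}

def ban_ip(ip):
    banned_ips[ip] = time.time() + BAN_SECONDS

def is_banned(ip):
    expiry = banned_ips.get(ip)
    if expiry is None:
        return False
    if time.time() >= expiry:
        del banned_ips[ip]  # ban has lapsed
        return False
    return True
```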
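And a sketch of the second option: lock the account, then email the owner a one-time unlock key. The `store_unlock_key` callback, the addresses and the URL are all hypothetical; the point is that the key is unguessable and that unlocking still requires the password, so a compromised mailbox alone is not enough:

```python
import secrets
import smtplib
from email.message import EmailMessage

def lock_account_and_notify(user_email, store_unlock_key, smtp_host="localhost"):
    """Lock the account and mail its owner a one-time unlock key.

    store_unlock_key is a hypothetical callback that marks the account
    as locked and saves the key (ideally hashed) against it.
    """
    key = secrets.token_urlsafe(16)  # cryptographically random, unguessable
    store_unlock_key(key)

    msg = EmailMessage()
    msg["Subject"] = "Suspicious login attempts on your account"
    msg["From"] = "security@example.com"  # placeholder address
    msg["To"] = user_email
    msg.set_content(
        "Someone made repeated failed attempts to log in to your account,\n"
        "so it has been locked as a precaution. To unlock it, visit:\n"
        f"https://example.com/unlock?key={key}\n\n"
        "You will still need your password after unlocking; it is never\n"
        "included in this email."
    )
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```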
If anyone has any better solutions, please leave a comment.