Why does Google use “Allow” in robots.txt, when the standard seems to be “Disallow?”
4th February 2012
My answer to Why does Google use “Allow” in robots.txt, when the standard seems to be “Disallow?” on Quora
The Disallow directive tells search engine crawlers not to crawl the paths you specify on your site.

The Allow directive is an extension (popularized by Google) that creates exceptions, letting crawlers back in to specific paths that would otherwise be blocked by a broader Disallow rule.
If you’re using the Google Webmaster tools, you probably want Google to crawl your site.
Am I misunderstanding your question?
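To illustrate the interaction between the two directives, here's a minimal sketch using Python's standard library `urllib.robotparser` (the `example.com` URLs are placeholders). One caveat: Google resolves conflicts using the most specific matching rule, while Python's parser applies rules in file order, so the `Allow` line is listed before the `Disallow` line here.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks /private/ but carves out one exception.
# Allow comes first because urllib.robotparser matches rules in order.
rules = """\
User-agent: *
Allow: /private/public-page.html
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/private/secret.html"))       # blocked
print(rp.can_fetch("*", "https://example.com/private/public-page.html"))  # allowed
print(rp.can_fetch("*", "https://example.com/index.html"))                # allowed
```

Without the `Allow` line, everything under `/private/` would be off-limits; with it, crawlers may fetch that one page while the rest of the directory stays blocked.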