I'm not an expert on robots.txt, and I have the following in one of my clients' robots.txt:
User-agent: *
Disallow:
Disallow: /backup/
Disallow: /stylesheets/
Disallow: /admin/
I am not sure about the second line. Does this line disallow all spiders?
It points to no path, so it does absolutely nothing. Strictly speaking, a Disallow directive with an empty value is valid under the original robots.txt standard: it means "nothing is disallowed", i.e. allow everything. Since the following lines already list the paths to block, the empty line is a no-op and crawlers will effectively ignore it; it does not disallow all spiders.
See the Wikipedia page for general information, or use Google Webmaster Tools to generate a robots.txt file for your website.
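If you want to verify what the rules actually block, here is a small sketch using Python's standard-library urllib.robotparser. The empty Disallow: line is left out, since an empty value means "allow everything" and contributes nothing; the domain and bot name are placeholders, not from the original post.

```python
import urllib.robotparser

# The three real rules from the question; the empty "Disallow:" line is
# omitted because an empty value means "allow everything" (a no-op here).
RULES = """\
User-agent: *
Disallow: /backup/
Disallow: /stylesheets/
Disallow: /admin/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# example.com and SomeBot are placeholders for illustration only
print(rp.can_fetch("SomeBot", "http://example.com/"))        # True: root is not blocked
print(rp.can_fetch("SomeBot", "http://example.com/admin/"))  # False: matches Disallow: /admin/
```

This confirms that spiders can still fetch the site root; only the three listed directories are blocked.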
After googling a little, I found some robots.txt checkers. All of them agree that there are errors in the robots.txt above.
One thing the checkers point out is that there should be an Allow: directive after all those Disallow: lines (note that Allow is a nonstandard but widely supported extension).
And, as poke mentioned, the first Disallow line requires a path.
These errors may be confusing Google's spiders. I am fixing them and hope to see results in a few days.
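For reference, a cleaned-up version of the file might look like this (simply dropping the empty Disallow: line; the rest of the rules are unchanged from the original):

```
User-agent: *
Disallow: /backup/
Disallow: /stylesheets/
Disallow: /admin/
```

Everything not listed remains crawlable by default, so no explicit Allow: line is strictly required.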
For robots.txt checkers: