views:

57

answers:

1

How is it possible that my page /admin/login.asp is found in Google with the query "inurl:admin/login.asp" while it isn't with the "site:www.domain.xx" query?

I have these lines in my robots.txt:

User-agent: *
Disallow: /admin/

And this in the HTML code of the page:

<meta name="robots" content="noindex, nofollow" />

Any ideas?

A: 

You can check in Google Webmaster Tools whether Google interprets your robots.txt correctly. You can also request the removal of a URL from the index there. Note that Disallow only stops Googlebot from *crawling* the page; it doesn't remove the URL from the index. Because the page is never fetched, Google never sees your noindex meta tag, so the bare URL (discovered through links) can still be indexed and show up for an "inurl:" query.
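You can also sanity-check the rule locally before waiting on Google. A minimal sketch using Python's standard-library `urllib.robotparser` (the domain `www.domain.xx` is the placeholder from your question):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# parse() accepts the file's lines directly, so no network fetch is needed
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

# Crawling of /admin/ is blocked for all user agents...
print(rp.can_fetch("*", "http://www.domain.xx/admin/login.asp"))  # False
# ...while the rest of the site stays crawlable
print(rp.can_fetch("*", "http://www.domain.xx/index.html"))       # True
```

If this prints False for the admin URL, the syntax is fine and the problem isn't your robots.txt.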

Fabian
Yes, GWT interprets it correctly. Under "Crawl errors" I even see the remark "URL restricted by robots.txt" for this page on "Jul 19, 2010".
waanders
Sure, I can request a removal. But I was wondering why it's found in the first place. Now I have to ask for a removal AFTER somebody tried to hack (??) my site :-(
waanders
@waanders: request a removal *and* ask Google why it's still found.
Joachim Sauer
I did, my request is pending. How can I ask Google that question?
waanders
Google has accepted my request; the page is no longer found with the "inurl:" query. I'm still wondering why there was a difference between the "inurl:" and "site:" commands, though.
waanders