If I want to allow crawlers to access only index.php, will this work?
User-agent: *
Disallow: /
Allow: /index.php
You can use the Google robots.txt testing tool to check. I would never put any secret directories in the robots file, since I'd guess that a line like the one below would act as honey for certain spiders.
Disallow: /secret
Try swapping the order of Disallow / Allow:
User-agent: *
Allow: /index.php
Disallow: /
See this info from Wikipedia:
"Yet, in order to be compatible to all robots, if you want to allow single files inside an otherwise disallowed directory, you need to place the Allow directive(s) first, followed by the Disallow, for example:"
http://en.wikipedia.org/wiki/Robots.txt
Still, I wouldn't expect it to work consistently across all crawlers.
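As a quick sanity check, here is a minimal sketch (not authoritative) that compares the two orderings with Python's standard-library parser, urllib.robotparser. That parser applies rules in file order (first match wins), unlike Googlebot's longest-match rule, so it illustrates why ordering matters for simpler crawlers; the example.org URLs are just placeholders from this thread.

from urllib.robotparser import RobotFileParser

def can_fetch(robots_txt, url, agent="*"):
    # Parse the robots.txt text and ask whether the given agent may fetch the URL.
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

disallow_first = "User-agent: *\nDisallow: /\nAllow: /index.php\n"
allow_first = "User-agent: *\nAllow: /index.php\nDisallow: /\n"

for name, txt in [("Disallow first", disallow_first), ("Allow first", allow_first)]:
    print(name,
          "| index.php:", can_fetch(txt, "http://www.example.org/index.php"),
          "| other page:", can_fetch(txt, "http://www.example.org/other.html"))

With this parser, only the Allow-first version lets /index.php through, which matches the Wikipedia note above; Googlebot itself accepts either order because it picks the most specific matching rule.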
Yes, it will work. Here's the test result from Google Webmaster Tools:

Url: http://www.example.org/index.php
Googlebot: Allowed by line 3: Allow: /index.php
Googlebot-Mobile: Allowed by line 3: Allow: /index.php
However, remember that with this configuration your site homepage won't be crawled unless the page is requested with the fully qualified path.
In other words, http://www.example.org/ is forbidden, while http://www.example.org/index.php is allowed.
If you want your homepage to be accessible, here's a better version of your file:
User-agent: *
Disallow: /
Allow: /index.php
Allow: /$
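Note that the $ in Allow: /$ is an end-of-URL anchor supported by Googlebot (and some other major crawlers) rather than part of the original robots.txt standard. Below is a rough, purely illustrative Python sketch of how a longest-match crawler like Googlebot would evaluate this final file; the matching logic is my own approximation, not Google's implementation, and real crawlers also handle * wildcards, percent-encoding, and tie-breaking.

import re

rules = [                      # (directive, pattern) in file order
    ("disallow", "/"),
    ("allow", "/index.php"),
    ("allow", "/$"),
]

def allowed(path):
    best = ("allow", "")       # no matching rule means the path is allowed
    for directive, pattern in rules:
        # Escape the pattern, then restore '$' as a regex end-of-string anchor.
        regex = re.escape(pattern).replace(r"\$", "$")
        # Longest matching pattern wins, mimicking Googlebot's most-specific-rule behavior.
        if re.match(regex, path) and len(pattern) > len(best[1]):
            best = (directive, pattern)
    return best[0] == "allow"

for path in ["/", "/index.php", "/private/page.html"]:
    print(path, "->", "allowed" if allowed(path) else "disallowed")

Running it shows / and /index.php allowed and everything else disallowed, which is exactly what the extra Allow: /$ line is there for.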