I found a post http://stackoverflow.com/questions/999056/ethics-of-robots-txt/999088#999088 discussing the ethics of honoring robots.txt on web sites. Generally, I agree with the principles. However, there are commercial tools that check Google positions, very likely by scraping Google for results, due to the lack of an API (in case someone doesn't know, there used to be one).
Google's robots.txt disallows all user agents from /search, which means they don't want you to do this. So I am confused: there is no other way, at least as far as I know, and yet many people (and companies) do this. Is this allowed or not?
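You can verify that rule programmatically with Python's standard-library robots.txt parser. The snippet below is a minimal sketch: the `robots_txt` string is a simplified, hypothetical excerpt standing in for the relevant line of Google's real file (https://www.google.com/robots.txt), and `MyScraper` is a made-up user agent name.

```python
from urllib.robotparser import RobotFileParser

# Simplified, hypothetical excerpt mirroring the relevant rule in
# Google's robots.txt (the real file has many more directives).
robots_txt = """\
User-agent: *
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Any generic crawler is disallowed from /search...
print(parser.can_fetch("MyScraper", "https://www.google.com/search?q=test"))  # False
# ...but the homepage itself is not disallowed by this rule.
print(parser.can_fetch("MyScraper", "https://www.google.com/"))  # True
```

So a tool that honors robots.txt would refuse to fetch the results pages, which is exactly what makes the behavior of these rank-checking tools puzzling.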
Google's TOS says:
5.3 You agree not to access (or attempt to access) any of the Services by any means other than through the interface that is provided by Google, unless you have been specifically allowed to do so in a separate agreement with Google.
However, I reckon RichieHindle must be right, because the number of SEO tools that use scraping techniques is endless.