Googlebot (Googlebot/2.1) appears to crawl URLs on newly added sites in an order corresponding to URL length:
.. GET /ivjwiej/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /voeoovo/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /zeooviee/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /oveizuee/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /veiiziuuy/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /oweoivuuu/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /oeppwoovvw/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
.. GET /aabieuuzii/ HTTP/1.1" 200 .. "Mozilla/5.0 (compatible; Googlebot/ ..
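If you want to check your own access logs for the same pattern, here is a minimal sketch (mine, not anything Google publishes) that keeps only Googlebot hits from a combined-format log and tests whether the requested paths arrive in non-decreasing length order. The file name `access.log` and the log format are assumptions; adjust the regex to whatever your server writes.

```python
# Sketch: does Googlebot request URLs in non-decreasing path-length order?
# Assumes a common/combined access log named "access.log" (adjust as needed).
import re

GOOGLEBOT = "Googlebot"
# Pulls the path out of lines like: ... "GET /ivjwiej/ HTTP/1.1" ...
REQUEST_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

def googlebot_paths(log_path="access.log"):
    """Yield request paths from Googlebot hits, in logged order."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if GOOGLEBOT not in line:
                continue
            match = REQUEST_RE.search(line)
            if match:
                yield match.group(1)

paths = list(googlebot_paths())
lengths = [len(p) for p in paths]
# True if every requested path is at least as long as the previous one.
print("sorted by length:", all(a <= b for a, b in zip(lengths, lengths[1:])))
```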
I've seen this exact pattern on more than ten completely independent sites, so the ordering does not look like a coincidence.
Just to avoid confusion: the crawling order may seem like a very minor detail of how Googlebot operates, and it is. Nevertheless, I want to understand the technical details of how Googlebot crawls the web, and the crawl ordering is one such detail. If you consider this piece of knowledge "useless", that's fine, but please don't pollute this page with such answers, since they won't be helpful. Unhelpful answers will be downvoted in accordance with the SO house rules.
My questions are:
- Have you (yes, you personally - not a blog you read, etc.) observed this crawling pattern?
- Is the crawling pattern officially documented by Google?
- What could be the reasons behind choosing this crawling pattern? (A toy sketch of one possible mechanism is included below.)
Please try to address all three (3) questions.
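To make question 3 concrete, the sketch below shows one mechanism that would produce this kind of ordering: a crawl frontier implemented as a priority queue keyed by URL length. This is purely speculative on my part, not documented Googlebot behaviour; the URLs are taken from the log excerpt, but the discovery order and the tie-breaking rule are invented.

```python
# Purely speculative toy model (question 3), not documented Googlebot
# behaviour: if the crawl frontier were a priority queue keyed by URL
# length, newly discovered URLs would be fetched shortest-first.
import heapq
from itertools import count

# URLs taken from the log excerpt above; this discovery order is invented.
discovered = ["/zeooviee/", "/ivjwiej/", "/aabieuuzii/", "/voeoovo/",
              "/oveizuee/", "/veiiziuuy/", "/oweoivuuu/", "/oeppwoovvw/"]

tie_breaker = count()  # keeps discovery order among equal-length URLs
frontier = [(len(url), next(tie_breaker), url) for url in discovered]
heapq.heapify(frontier)

while frontier:
    _, _, url = heapq.heappop(frontier)
    print("GET", url)  # shortest paths are fetched first, as in the log
```

A breadth-first crawl could produce something similar indirectly, since shorter URLs tend to sit higher in a site's structure, but only Google can say which mechanism, if either, is actually in play.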