I've seen the second one in someone else's code, and I suppose the length comparison was done to improve performance. It was used in a parser for a scripting language with a specific dictionary: words are 4 to 24 letters long with an average of 7-8 letters, and the alphabet consists of the 26 Latin letters plus "@", "$" and "_".
The length comparison was used to avoid calling the == operator on STL strings, which obviously takes more time than a simple integer comparison. But at the same time, the distribution of first letters in this dictionary is simply wider than the distribution of word lengths, so the first letters of two compared strings will generally differ more often than their sizes do. That makes the length comparison unnecessary.
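To make it concrete, here is a minimal sketch of the two ways I'm talking about, assuming plain std::string arguments (the function names are just for illustration, not the actual code from the project):

```cpp
#include <string>

// First way: rely on operator== directly.
bool equal_plain(const std::string& a, const std::string& b)
{
    return a == b;
}

// Second way: reject mismatched sizes with a cheap integer comparison
// before falling back to operator==.
bool equal_with_length_check(const std::string& a, const std::string& b)
{
    if (a.size() != b.size())
        return false;
    return a == b;
}
```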
I ran some tests, and this is what I found: when comparing two random strings a million times, the second way is much faster, so the length comparison seems to be helpful. But in the actual project it works even slower in debug mode and only insignificantly faster in release mode.
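The micro-benchmark was roughly of this shape, using the dictionary described above (the random-word generator, names and iteration count here are illustrative, not the exact test code):

```cpp
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <string>

// Generate a word of 4..24 characters from the dictionary's alphabet.
static std::string random_word()
{
    static const std::string alphabet = "abcdefghijklmnopqrstuvwxyz@$_";
    std::size_t len = 4 + std::rand() % 21;
    std::string w;
    for (std::size_t i = 0; i < len; ++i)
        w += alphabet[std::rand() % alphabet.size()];
    return w;
}

int main()
{
    const int iterations = 1000000;
    std::string a = random_word();
    std::string b = random_word();

    volatile bool sink = false;  // prevent the compiler from dropping the loops

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = (a == b);                               // first way
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = (a.size() == b.size()) && (a == b);     // second way
    auto t2 = std::chrono::steady_clock::now();

    using std::chrono::duration_cast;
    using std::chrono::microseconds;
    std::cout << "plain ==          : "
              << duration_cast<microseconds>(t1 - t0).count() << " us\n"
              << "length check + == : "
              << duration_cast<microseconds>(t2 - t1).count() << " us\n";
    return 0;
}
```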
So, my question is: why can the length comparison speed the comparison up, and why can it slow it down?
UPD: I don't like the second way either, but it was done for a reason, I suppose, and I wonder what that reason is.
UPD2: Seriously, the question is not how to do this best. I'm not even using STL strings in this case anymore. It's no surprise that the length comparison is unnecessary, wrong, etc. The surprise is that it really does tend to work slightly better in one particular test. How is this possible?