I don't see a real difference between the two examples you provided.
What percentage of your programming time do you spend actually typing code? I expect 5% is an overestimate. When you take into account thinking time, testing, bug-hunting, consulting references, and in some cases, compiling, you see that saving a few characters doesn't make a tremendous difference. Especially if they're easy-to-reach characters (alphabetic ones).
When it comes to actually reading code, extra characters don't consume much extra time at all. In fact, we've had a lot of practice reading English (many of us, anyway) and extra words can actually help us make sense of what we're looking at.
One of the reasons that BASIC has refused to die is that it takes so little effort to read. Compare these two:
For x = 1 To 10 Step 2
for(x=1; x<11; x+=2)
My point is that on the "micro" scale, when it comes to individual keywords and syntax, minor changes aren't going to make a big difference, and blindly trying to save characters is a bad idea.
It's the macro scale that counts: how many lines it takes to do something, how many functions you have to invent to do something, how much thought needs to go into something (worrying about array bounds, or empty strings, or simply solving problems that the language could solve for you). Languages that have this sort of "shorthand" are better.
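To illustrate the macro-scale point (this is my own sketch, not from the original post, and Python's `sum` is just one convenient example of such shorthand): the index-managed version below forces you to worry about bounds and off-by-one errors, while the shorthand version eliminates that whole category of thought and even handles an empty list for free.

```python
values = [3, 1, 4, 1, 5]

# Micro-managed: explicit index, explicit bounds check, off-by-one risk.
total = 0
i = 0
while i < len(values):
    total += values[i]
    i += 1

# Shorthand: no index, no bounds, and an empty list just yields 0.
total_short = sum(values)

print(total, total_short)  # both print 14
```

The character count barely differs per line; what differs is how much of the problem the language solves for you.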