The pre- and post-increment operators make much more sense if you consider them in the light of history and when they were conceived.
Back in the days when C was basically a high-level assembler for PDP-11 machines, and long before we had the nice optimizing compilers we have now, there were common idioms that the post-increment operators were perfect for. Things like this:
/* simplified version of the standard library function */
char *strcpy(char *dest, const char *src)
{
    char *ret = dest;            /* save the original destination */
    while ((*dest++ = *src++))   /* copy bytes, including the '\0' */
        ;
    return ret;
}
The code in question compiled to PDP-11 (or other) machine code that made heavy use of the underlying addressing modes (in the PDP-11's case, the autoincrement and autodecrement modes) that incorporated exactly these kinds of pre- and post-increment and decrement operations.
So to answer your question: do languages "need" these nowadays? No, of course not. It's provable that you need very little in terms of instructions to compute things. The question is more interesting if you ask "are these features desirable?" To that I'd answer a qualified "yes".
Using your examples:
y = x;
x += 1;
vs.
y = x++;
I can see two advantages right off the top of my head.
- The code is more succinct. Everything I need to know to understand what you're doing is in one place (as long as I know the language, naturally!) instead of spread out. "Spreading out" across two lines seems like a picky thing but if you're doing thousands of them it can make a big difference in the end.
- It is far more likely that the code generated even by a crappy compiler will be a single instruction (and thus effectively atomic on that platform) in the second case. In the first case it very likely will not be unless you have a nice compiler. (Not all platforms have good, strong optimizing compilers.)
Also, I find it very telling that you're talking about += when that itself is an "unneeded" way of saying x = x + 1. After all, there is no use case scenario I can think of for += that couldn't be served fine by _ = _ + _ instead.