views: 321
answers: 2

I have never seen a use case for pre-increment and post-increment in actual code.

The place I see them most often is in puzzles.
My opinion is that they introduce more confusion than usefulness.

  • Is there any real use case scenario for these operators?
  • Can't the same thing be done using `+=`? (See the sketch below.)

    y = x++;

    /* equivalent to: */
    y = x;
    x += 1;
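
To be clear about what I understand the two forms to do, a small sketch (pre-increment yields the new value, post-increment yields the old one):

    int x = 5;
    int a = ++x;    /* x becomes 6 first, so a is 6 */

    int y = 5;
    int b = y++;    /* b gets the old value 5, then y becomes 6 */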

+4  A: 

It's just a shorter way of writing the same thing and it's only confusing to those who don't grok C.

The same argument could be made for replacing:

for (i = 0; i < 10; i++)  {
    printf ("%d\n", i);
}

with:

i = 0;
while (i < 10) {
    printf ("%d\n", i);
    i = i + 1;
}

or ($DEITY forbid):

i = 0;
loop:
    printf ("%d\n", i);
    i = i + 1;
if (i < 10) goto loop;

but (I'm hoping) you wouldn't do that, would you?

Or, invoking "reductio ad absurdum" mode, perhaps you'd rather change:

i = i + 10;

into:

i = i + 1; i = i + 1; i = i + 1; i = i + 1; i = i + 1;
i = i + 1; i = i + 1; i = i + 1; i = i + 1; i = i + 1;

:-)

paxdiablo
why not `for (i = 0; i < 10; i+=1)`
Anantha Kumaran
`void strcpy(char* a, char* b) { while (*a++ = *b++) ; }`
Travis Gockel
@Anantha, you're missing the point. What I'm saying is that you would use `for` rather than `while` for that particular case (the `i++`/`i+=1` distinction is irrelevant there). Bottom line: use the language features you have available to you. Understand them.
paxdiablo
@paxdiablo Yes, I didn't read your answer carefully.
Anantha Kumaran
+3  A: 

The pre- and post-increment operators make much more sense if you consider them in the light of history and when they were conceived.

Back in the days when C was basically <flamebait>a high-level assembler for PDP-11 machines</flamebait>, and long before we had the nice optimizing compilers we have now, there were common idioms that the post-increment operators were perfect for. Things like this:

char *strcpy(char *dest, const char *src)
{
  /* simplified version of the classic idiom */
  char *ret = dest;             /* remember the start of dest for the return */
  while ((*dest++ = *src++))    /* copy each byte, stopping after the NUL */
    ;
  return ret;
}

The code in question generated PDP-11 (or other) machine language that made heavy use of the underlying addressing modes (like autoincrement and autodecrement) that incorporated exactly these kinds of pre- and post-increment and decrement operations.
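
A stack is another place where these idioms fall out naturally. Here is a minimal sketch (hypothetical push/pop helpers over a fixed-size array), with pre-decrement and post-increment mirroring those same autodecrement/autoincrement modes:

/* a descending stack: the top of the stack moves downward */
int stack[64];
int *sp = stack + 64;               /* initially one past the end: empty */

void push(int v) { *--sp = v; }     /* pre-decrement: move down, then store */
int  pop(void)   { return *sp++; }  /* post-increment: load, then move up */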

So to answer your question: do languages "need" these nowadays? No, of course not. It's provable that you need very little in terms of instructions to compute things. The question is more interesting if you ask "are these features desirable?" To that I'd answer a qualified "yes".

Using your examples:

y = x;
x += 1;

vs.

y = x++;

I can see two advantages right off the top of my head.

  1. The code is more succinct. Everything I need to know to understand what you're doing is in one place (as long as I know the language, naturally!) instead of spread out. "Spreading out" across two lines seems like a picky thing but if you're doing thousands of them it can make a big difference in the end.
  2. It is far more likely that the code generated even by a crappy compiler will be atomic in the second case. In the first case it very likely will not be unless you have a nice compiler. (Not all platforms have good, strong optimizing compilers.)

Also, I find it very telling that you're talking about `+=` when that itself is an "unneeded" way of saying `x = x + 1`. After all, there is no use case scenario I can think of for `+=` that couldn't be served just as well by `_ = _ + _` instead.
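
(That said, `+=` is not pure sugar either: C guarantees that the lvalue of a compound assignment is evaluated only once. A minimal sketch, using a hypothetical impure index function `f`, of where the two spellings diverge:)

#include <stdio.h>

static int calls = 0;

static int f(void)          /* hypothetical impure index function */
{
    calls++;
    return 0;
}

int main(void)
{
    int x[1] = { 0 };

    x[f()] += 10;                         /* lvalue evaluated once: f runs once */
    printf("calls so far: %d\n", calls);  /* prints 1 */

    x[f()] = x[f()] + 10;                 /* index spelled twice: f runs twice */
    printf("calls so far: %d\n", calls);  /* prints 3 */

    return 0;
}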

JUST MY correct OPINION
+1 for the history lesson, but I'm not sure that's flamebait; it really *was* a high-level assembler (not sure whether it was PDP-8 or PDP-11 though). And your last paragraph is a good point as well.
paxdiablo
Good answer, but I would argue that the semantics of unary operators vary *widely* from the semantics of the binary ones.
Travis Gockel
Using code like `x[f(y)][g(z)] += 10` could actually be more efficient than `x[f(y)][g(z)] = x[f(y)][g(z)] + 10` because the compiler only has to evaluate `x[f(y)][g(z)]` once in the first case, but possibly twice in the second.
Gabe
@gabe: Absolutely correct. In languages like C, `x[f(y)][g(z)]` will be evaluated once at run time and return an lvalue, whose reference can be evaluated and then assigned, without having to step through a second evaluation. If `f` and `g` are impure functions, this can result in different behavior between the two forms you suggested. Yikes!
Travis Gockel
@gabe: A modern compiler will work that out for itself much of the time. (It's actually difficult to impossible to outperform a modern optimizing compiler with hand-made assembler these days.) @paxdiablo: C stems from PDP-11, not PDP-8, assembler for sure. @Travis G: The "multiple evaluation" thing was one of the first peephole optimizations made. Even in older compilers, `x++`, `x+=1` and `x=x+1` would generate the same code (although more complex lvalues would require much better optimizers, of course).
JUST MY correct OPINION
I stand corrected. It looks like it was _UNIX_ on the PDP-7 that they converted from assembler to C in the port to the PDP-11. Apologies.
paxdiablo
It's a complicated history. Easy to get wrong. :)
JUST MY correct OPINION
@ttmrichter: There are some situations where you *want* evaluation to occur twice (impure functions), so it is impossible for the optimizer to get rid of multiple evaluations if you never meant for them to occur. And in languages that allow operator overloading, then the single binary operator `+=` can have completely different execution than addition and assignment.
Travis Gockel
@Travis G: Of course. The optimizer can only optimize away that which it knows has no relevant side effects. Again, though, a modern compiler can know whether f() and g() have side effects. If they don't, the code is the same. If they do, the code is different. This is, incidentally, why I tend to prefer coding in a "functional" style even when not using functional languages.
JUST MY correct OPINION
`x[f(y)][g(z)]` can only be safely optimized to be evaluated just once in the case of C++ where f and g are declared `const` or if you have a whole-program optimizer that can detect that f and g have no side-effects. The optimization is possible, but far from guaranteed! If f and g are external functions the compiler has no choice but to evaluate twice.
Gabe
I was thinking, in fact, of both cases. Programmer-annotated stuff like `const` declarations or whole-program (or module-level at least) optimization. If `f()` and `g()` are in the same source file as the `x[f(y)][g(z)]` expression, the optimizer can deal with it even if it's not whole-program.
JUST MY correct OPINION
OK, how did you get the code stuff in your comment, Gabe?
JUST MY correct OPINION
@ttmrichter: You just use backticks (`) like usual. And I think C++0x `constexpr` is one of the best additions to the language.
Travis Gockel