views: 574
answers: 6

A: 

EDIT: Wrong language, but still applies

I agree with your three reasons, although there is one scenario where I have lamented the lack of this operator: writing custom deserialization routines. In a couple of cases where an improperly serialized object wasn't really "exceptional" (and to avoid exception overhead in C# for very common failures), I would use boolean return values to signify whether a deserialization operation was successful.

This code is completely theoretical, and Nested1, Nested2 and Nested3 are all structs:

public struct ComplexStruct
{
    private Nested1 nested1;
    private Nested2 nested2;
    private Nested3[] nested3;

    public bool Read(Reader reader)
    {
        // Note: &&= is the hypothetical operator under discussion; it does not exist in C#.
        bool ret = true;
        int nested3Length = 0;

        ret &&= nested1.Read(reader);
        ret &&= nested2.Read(reader);
        ret &&= reader.ReadInt32(ref nested3Length);

        // Allocate the array once its length is known.
        if (ret)
            nested3 = new Nested3[nested3Length];

        for (int i = 0; ret && i < nested3Length; i++)
        {
            ret &&= nested3[i].Read(reader);
        }

        return ret;
    }
}
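
For comparison, here is a minimal, self-contained sketch of how the same chain has to be written today in C (the question's language), with the hypothetical `ret &&= ...` spelled out as `ret = ret && ...`; `step1`, `step2` and `step3` are made-up stand-ins for the `Read` calls:

    #include <stdbool.h>
    #include <stdio.h>

    /* Made-up stand-ins for the Read() calls above; each returns
       true on success and false on failure. */
    static bool step1(void) { puts("step1"); return true; }
    static bool step2(void) { puts("step2"); return false; }
    static bool step3(void) { puts("step3"); return true; }

    int main(void)
    {
        bool ret = true;

        /* Spelled-out form of the wished-for `ret &&= stepN();`:
           once ret is false, the remaining steps are not evaluated. */
        ret = ret && step1();
        ret = ret && step2();
        ret = ret && step3();   /* step3 never runs: step2 failed */

        printf("ret = %d\n", ret);   /* prints 0 */
        return 0;
    }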
LorenVS
Well, that was almost the same scenario as when I needed it (the loop part). I will now edit my question to clarify that.
Armen Tsirunyan
In that case, you could just `return false;` early.
Ben Voigt
@LorenVS: I retagged my question :)
Armen Tsirunyan
LorenVS
+8  A: 

I've wanted them before as well. I doubt that ugliness was by itself the determining factor; the short-circuiting behavior is the best argument against them I've heard so far.

Another contributing factor is that C is designed to operate close to the metal: pretty much all of its operators correspond directly to an instruction on major architectures. I don't think there is an instruction on x86, PPC, etc. that directly implements `b &&= b1;`.
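
To illustrate the point: with short-circuiting preserved, `b &&= b1;` would have to lower to a compare and a branch rather than a single AND instruction. A rough sketch of the expansion, assuming plain `int` operands:

    #include <stdio.h>

    int main(void)
    {
        int b = 1, b1 = 5;

        /* What a short-circuiting `b &&= b1;` would have to mean:
           test b, branch, and normalize b1 to 0 or 1. That is a
           compare and a branch, not one bitwise-AND instruction. */
        if (b)
            b = (b1 != 0);

        printf("b = %d\n", b);   /* prints 1 */
        return 0;
    }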

Ben Voigt
+1. What operations were available on the PDP-11 is probably more of a determinant, but I still think you are onto something there.
T.E.D.
@T.E.D.: Of course, but modern major architectures inherit the bulk of their instructions from the architectures of yore, especially in the common areas.
Ben Voigt
Huh? Short-circuiting is the argument *for* them. Without that, just use `&=` (but with only one evaluation of `b`, of course). The compiler chooses between a branch and a bitwise AND.
Potatoswatter
… well, if `b` is not type `bool`, then the assignment is necessary. But the compiler can be smart about that, just as it's smart about deciding not to really short circuit if there are no side effects.
Potatoswatter
@Ben: Right. I got that backwards. Anyway it doesn't change anything.
Potatoswatter
Actually Ben, I think the DEC chips C was built around inspired modern architectures mostly as a counter-example. They tried to turn all common software operations into instructions, to the extreme that they ended up having CPU instructions for list manipulations. Some designers thought this was exactly the **wrong** approach, and invented RISC.
T.E.D.
@T.E.D.: So you're saying that newer chips kept only the instructions that C compilers tended to use, and dropped support for more lispy instructions? That makes sense too.
Ben Voigt
@Ben Voigt - It's more like newer chips (note: the 8086 instruction set is not "new") decided that, rather than trying to make complex instructions to help assembly programmers, they should make very simple instructions to help themselves speed things up. Since almost everyone uses compilers now, only the compiler writers have to deal with the pain of doing real work with a very simple instruction set. It's a totally different way of looking at instruction set design. See http://en.wikipedia.org/wiki/Reduced_instruction_set_computing for more information.
T.E.D.
+1  A: 

Personally I'd vote for the first rationale you went with. The boolean operators have short-circuit semantics, which would make for some really gnarly situations if translated into assignment operators. Either you don't make them short-circuit anymore, or you create some weird "optional" assignment operator (do the stuff on the right and assign the result only if the value on the LHS is already non-zero). Either way you'd create subtle bugs because people would be expecting the other behavior.

T.E.D.
+2  A: 

The biggest reason the operators don't exist is probably that K&R didn't think of any appealing way to define them. I've also sometimes wanted a `->=` operator (`ptr ->= next` would be equivalent to `ptr = ptr->next`).

A problem I think with &&= is that it's not obvious which of the following would be most useful, or which it's supposed to be:

  if (lhs && rhs) lhs = 1; else lhs = 0;
  if (!rhs) lhs = 0; else lhs = !(!lhs);
  if (lhs && !rhs) lhs = 0;
  if (!rhs) lhs = 0;
The first variation is the one most clearly suggested by the syntax, but from a practical standpoint, if neither term is zero, it would often be more useful to leave the left-hand side alone than to set it to "1".
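
A concrete illustration of that difference, as a small sketch: starting with a left-hand side of 5 and a right-hand side of 3, the first variant collapses the value to 1 while the third preserves it.

    #include <stdio.h>

    int main(void)
    {
        int a = 5, b = 5, rhs = 3;

        /* Variant 1: normalizes the left-hand side to 0 or 1. */
        if (a && rhs) a = 1; else a = 0;

        /* Variant 3: only clears the left-hand side; a nonzero value survives. */
        if (b && !rhs) b = 0;

        printf("variant 1: %d, variant 3: %d\n", a, b);   /* prints 1, 5 */
        return 0;
    }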

BTW, I've often wished for a variation of the comma operator which would evaluate the left side, save the value, then evaluate the right side, and return the value of the left side. Equivalent to:

int foo(int p1, int p2) { return p1; }
except applicable to any type (p2 need not be the same type as p1, and could be void), and with a guaranteed left-to-right evaluation order. Would be very handy for things like post-increment indexing with a non-unit step, e.g., `arr[ptr ~, ptr += 2];` or for certain types of data-swap operations, e.g., `var1 = (var2 ~, var2 = var1);` etc.
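
For what it's worth, the indexing case can be approximated today with a small helper; `post_add` below is a hypothetical name, it covers only one type (unlike the proposed operator), and an integer index is used for simplicity:

    #include <stdio.h>

    /* Hypothetical helper standing in for the wished-for operator:
       returns the current value of *idx, then bumps it by step. */
    static int post_add(int *idx, int step)
    {
        int old = *idx;
        *idx += step;
        return old;
    }

    int main(void)
    {
        int arr[] = {10, 20, 30, 40, 50};
        int i = 0;

        /* Roughly the effect intended for post-increment indexing
           with a non-unit step. */
        printf("%d\n", arr[post_add(&i, 2)]);   /* 10, i becomes 2 */
        printf("%d\n", arr[post_add(&i, 2)]);   /* 30, i becomes 4 */
        return 0;
    }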

supercat
@Michael Burr: That would be syntactically the natural implementation, but in practice there are times when the third would be more useful than the first, and I can't think of as many cases where the first would actually be more useful.
supercat
I had proposed this equivalent to a now-deleted question: `if (lhs) lhs = (rhs != 0);` It seems to have the expected short-circuiting behavior. However, it's probably faster to perform `if (lhs) lhs = rhs;`, which also short-circuits but doesn't coerce every non-zero rhs to 1.
Ben Voigt
+4  A: 

I don't know why both the question and some of the answers mention short-circuiting behavior of the corresponding logical operators as a potential issue.

There are absolutely no short-circuit-related problems with defining `&&=` and `||=` operators. They should be defined uniformly with `+=` and the other similar operators, meaning that `a &&= b` should be equivalent to `a = a && b`, but with `a` being evaluated only once in the `&&=` version. This in turn means that `b` is not evaluated at all if `a` is originally zero. Easy.
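
A runnable sketch of that definition, with the hypothetical `a &&= f()` spelled out by hand; `f` is a made-up function with a visible side effect, so the output shows it is never evaluated when `a` starts out zero:

    #include <stdio.h>

    /* Made-up right-hand side with a visible side effect. */
    static int f(void)
    {
        puts("f() evaluated");
        return 1;
    }

    int main(void)
    {
        int a = 0;

        /* Hand expansion of the proposed `a &&= f();`, i.e. `a = a && f()`
           with `a` evaluated only once: since a is 0, f() is never called. */
        if (a)
            a = (f() != 0);

        printf("a = %d\n", a);   /* prints 0; "f() evaluated" never appears */
        return 0;
    }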

So, the only reason they don't exist in the language is, well, "just because".

AndreyT
Good point about the short-circuit thing, but technically speaking `a = a + b` is not equivalent to `a += b` if evaluating `a` has side effects, e.g. if it is a call to a function that prints something and returns a reference (in the first case it will print twice and in the second only once).
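
A small sketch of that difference, using a pointer-returning function in C rather than a C++ reference; `lhs()` is a made-up name:

    #include <stdio.h>

    static int x = 0;

    /* Made-up lvalue producer with a visible side effect. */
    static int *lhs(void)
    {
        puts("lhs() evaluated");
        return &x;
    }

    int main(void)
    {
        *lhs() = *lhs() + 1;   /* prints "lhs() evaluated" twice */
        *lhs() += 1;           /* prints it only once */
        printf("x = %d\n", x); /* prints 2 */
        return 0;
    }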
Armen Tsirunyan
@T.E.D. Using the same logic... "Who would want `+=`? An assignment operator that (except for trivial cases) doesn't actually assign the right-hand value to the left-hand object?" None of these operations make sense if you ignore the operator.
MSalters
+1  A: 

Because the result of `a && b` is always 0 or 1, I think the interpretation of this operator would only be unambiguous for the C99 `_Bool` type. Since that type didn't exist at the time C was created, the operator wasn't included. And nowadays nobody adds another operator to C lightly, since that would have an impact on all existing parsers.
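
As a side note, for C99 `_Bool` operands the existing `&=` already produces the same value that a hypothetical `&&=` would, precisely because a `_Bool` only ever holds 0 or 1; what it cannot give you is short-circuit evaluation. A sketch:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        bool ok = true;

        /* Because a bool is always 0 or 1, the bitwise compound assignment
           already computes the logical-AND value; the only difference from
           a hypothetical `ok &&= ...` is that the right-hand side is still
           evaluated even once ok is false. */
        ok &= (3 > 1);   /* ok stays true */
        ok &= (1 > 3);   /* ok becomes false */
        ok &= (2 > 1);   /* evaluated anyway; ok stays false */

        printf("ok = %d\n", ok);   /* prints 0 */
        return 0;
    }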

Jens Gustedt