Syntactic sugar, IMHO, generally makes programs much more readable and easier to understand than coding from a very minimalistic set of primitives. I don't really see a downside to good, well thought out syntactic sugar. Why do some people basically think that syntactic sugar is at best superfluous and at worst something to be avoided?

Edit: I didn't want to name names, but since people asked, it seems like most C++ and Java programmers, for example, frankly don't care about their language's utter lack of syntactic sugar. In a lot of cases, it's not necessarily that they just like other parts of the language enough to make the lack of sugar worth the tradeoff; it's that they really don't care. Also, Lisp programmers seem almost proud of their language's strange notation (I won't call it syntax because technically it isn't), though in this case it's more understandable, because that notation is what allows Lisp's metaprogramming facilities to be as powerful as they are.

+1  A: 

Possibly because it leads to confusion in programmers who don't know what is really happening behind the scenes, which could in turn lead to inefficient or poorly written code. Just a guess; I don't think it is a "bad thing" either.

Ed Swangren
+1  A: 

Syntactic sugar can either make your program more understandable or less so. If you add syntactic sugar for trivial things, you just add cognitive burden, because the language becomes more complicated. On the other hand, if you can add syntactic sugar that manages to pinpoint a specific concept and highlight it, then you win.

antti.huima
+3  A: 

Too much unnecessary sugar just adds bloat to a language. I would name names, but then I would just get flamed. :) Also, sometimes languages employ syntactic sugar instead of doing a real implementation. For instance, there is a language that shall remain nameless whose "generics implementation" is just a thin layer of syntactic sugar.

BobbyShaftoe
Are you hating on both .NET AND Java in the same response? Bravo! ;-)
Outlaw Programmer
In a way, yes. :)
BobbyShaftoe
A: 

It's more typing and more layers of abstraction. I'd much rather use a language that is designed to have higher levels of abstraction than a language with syntactic sugar tacked on to do a poor job of imitating features other languages have built in.

Jared
What differentiates one language's syntactic sugar from another language's feature? You could argue that most natively compiled languages are just syntactic sugar on top of assembly. Similarly, OOP can be done in C, it's just not pretty.
Ryan Graham
some aspect of OOP, I mean
Ryan Graham
@Ryan-Graham, no, I disagree. I don't think it is that relativistic. You could go to those extremes; however, I think when you introduce syntax for something that can already be done with trivial code in the language, it is just syntactic sugar. Also, if it tries to do *too much*, something over and above that, then it may be bloat.
BobbyShaftoe
+6  A: 

"Syntactic sugar causes cancer of the semicolon." (Alan Perlis)

It is difficult to reason about syntactic sugar if the reasoning takes place without reference to a context. There are lots of examples about why "syntactic sugar" is good or bad, and all of them are meaningless without context.

You mention that syntactic sugar is good when it makes programs readable and easier to understand... and I can counter by saying that sometimes syntactic sugar can affect the formal structure of a language, especially when it is a late addendum during the design of a programming language.

Instead of thinking in terms of syntactic sugar, I like to think in terms of well-designed languages that foster readability and ease of understanding, and badly designed ones.

Regards,

Kwang Mark Eleven
+1  A: 

See the Law of Leaky Abstractions: with too much sugar, you end up using it without understanding or knowing what is going on, which makes it increasingly hard to debug if something does go wrong. It's not so much that "syntactic sugar" is a bad thing, just that a lot of programmers rely on it without really being aware of what they are shielded from, and then if the syntactic sugar runs into problems they're screwed.

Wayne M
+1  A: 

Personally, I've always found the term "syntactic sugar" ambiguous. I mean if you want to get technical, just about anything other than basic arithmetic, an if statement, and a goto is syntactic sugar.

I think what most people mean when they dismiss "syntactic sugar" is that a language feature makes something complicated look deceptively simple. The most notorious example of this is Perl. But since I'm not a Perl expert, I'll give you an example of what I'm talking about in Python (taken from this question):

reduce(list.__add__, map(lambda x: list(x), [mi.image_set.all() for mi in list_of_menuitems]))

This is an obvious attempt at making something simpler gone horribly, horribly wrong.

That's not to say I'm on the side of removing such features though. I think that such features just need to be used carefully.

Jason Baker
+5  A: 

Syntactic sugar can in some cases interact in unpleasant ways.

some specific examples:

The first is C# (or Java) specific: autoboxing combined with the lock/synchronized construct.

private int i;
private object o = new object();

private void SomethingNeedingLocking(bool b)
{
    // When b is true, i is boxed into a brand-new object on every call,
    // so each caller ends up locking on a different instance.
    object lk = b ? i : o;
    lock (lk) { /* do something */ }
}

In this example the helpful lock construct, which can use any object as a synchronization point, combines with autoboxing to produce a possible bug: the lock is simply taken on a new boxed instance of i each time. It is arguable that the lock construct is overly helpful and that some more specific construct on which to lock would be better, but certainly the combination is still flawed.
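A common way to sidestep this (a minimal sketch; the field name is illustrative) is to lock only on a dedicated, private reference-type field, as one of the comments below also suggests:

private int i;
private readonly object sync = new object(); // dedicated lock target; never boxed, never reassigned

private void SomethingNeedingLocking()
{
    lock (sync) { /* do something with i */ } // every caller contends on the same instance
}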

Multiple variable declaration and pointers:

long* first, second;  /* first is a long*, but second is just a long */

A classic bug (though easy to spot): the * binds only to first, so second is declared as a plain long. The sugar of declaring multiple variables at once doesn't fit with pointer declaration syntax.

Some constructs do not need to interact with other sugar to cause issues; a classic example is the ++ operator. It neatly lets you avoid writing

i = i + 1;

A widely used construct (and one whose longhand form has its own scope for bugs, since you must remember to update both occurrences of i if you wish to change the variable). However, because ++ is easy to embed within other expressions, the issue of prefix versus postfix rears its head. When used alone in a for loop this doesn't matter, since the evaluation happens outside any other evaluation, but used elsewhere it can be a source of confusion, because a very important aspect of the calculation (whether the current or the next value should be used) is packed into a very small and easily missed form.
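A quick sketch of the prefix/postfix distinction (the values here are illustrative):

int i = 5;
int a = i++; // postfix: a gets the old value (5), then i becomes 6
int j = 5;
int b = ++j; // prefix: j becomes 6 first, then b gets the new value (6)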

All of the above (except perhaps the lock/box one, which the compiler really should spot for you) are cases where the usage may well be fine, or where experienced programmers may think "that's perfectly clear to me", but the scope for confusion exists, certainly for novice programmers or those moving to a different syntax.

ShuggyCoUk
In C, operator ++ was originally not sugar but an optimisation
anon
fair point, C's sugar for assembly anyway j/k ;)
ShuggyCoUk
Taking a lock on a value type is not legal in C# (I don't know about Java). You would have to do something like this: lock((object)i). But then, that makes the bug obvious, doesn't it!
Jeffrey L Whitledge
Sorry, yes - I tried to simplify too much and eliminated that part. Fixing now.
ShuggyCoUk
Your point on the i++ / i = i + 1 syntactic sugar having consequences can be illustrated by this question: http://stackoverflow.com/questions/547668/why-isnt-our-c-graphics-code-working-any-more where the change from x = x * y to x *= y caused some issues.
Esteban Brenes
@Esteban: I think that is simply confusion about *what* the sugar does, as opposed to complex interactions. But mixing sugar into large, complex expressions or statements is indeed risky unless the sugar is explicitly designed to make such things easier and safer.
ShuggyCoUk
It really would have been better if lock used a special type of locking object rather than giving every Object a sync block. It always irks me when I have to construct a new Object() because I need a context on which to safely and privately lock.
Dan Bryant
@Dan ++ I agree
ShuggyCoUk
+1  A: 

Syntax, in general, makes a language harder to learn, let alone master. Therefore, the smaller the syntax, the easier the language is to learn and to try to master. This is a major reason why many new languages borrow their syntax from popular existing languages.

Also, while I can simply avoid learning certain features I'm not interested in, I'll eventually find myself reading code by someone who does like a given feature, and then I'll need to go learn it just to understand their code.

C. Dragon 76
+2  A: 

I have always understood "syntactic sugar" to refer to any syntax added to an existing language that does not extend the capabilities of the language. Otherwise, anything less direct than binary machine language could be called syntactic sugar.

Even though they do not extend the capabilities of a language, they can still be very useful.

For example, LINQ is syntactic sugar, because it doesn't add any new capabilities to C# 3 that were not already possible in C# 2. But doing the same thing as a simple LINQ expression in C# 2 would take vastly more code and be much harder to read.
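As a rough illustration (a hypothetical filter over a list of integers; the names are made up for the sketch):

using System.Collections.Generic;
using System.Linq;

class LinqVsLoops
{
    // C# 3 with LINQ: one declarative expression
    static List<int> EvensWithLinq(List<int> numbers)
    {
        return numbers.Where(n => n % 2 == 0).ToList();
    }

    // Roughly equivalent C# 2: the same result, spelled out imperatively
    static List<int> EvensWithLoop(List<int> numbers)
    {
        List<int> evens = new List<int>();
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
                evens.Add(n);
        }
        return evens;
    }
}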

Conversely, generics are not syntactic sugar, because you can do things with them in C# 2 that were impossible in C# 1, such as creating a collection class that can contain any value type without boxing.
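A minimal sketch of that difference (the class and variable names are illustrative):

using System.Collections;
using System.Collections.Generic;

class GenericsVsBoxing
{
    static void Demo()
    {
        // C# 1: ArrayList stores object, so every int added is boxed
        ArrayList untyped = new ArrayList();
        untyped.Add(42);          // boxing allocation
        int a = (int)untyped[0];  // unboxing cast required

        // C# 2: List<int> stores ints directly - no boxing, no cast
        List<int> typed = new List<int>();
        typed.Add(42);
        int b = typed[0];
    }
}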

Jeffrey L Whitledge
+2  A: 

Nonsense. C and Lisp programmers use syntactic sugar all the time.

Examples:

  • a[i] instead of *(a+i)
  • '(1 2 3) instead of (quote (1 2 3))
kotlinski