I've heard that the following features reduce debuggability (because they are anonymous, and debuggers cannot trace them well):

  1. Anonymous Classes
  2. Inner Classes
  3. Closures / blocks / lambda functions

Is this true?
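For concreteness, here is a minimal Java sketch (Java is just one representative language with these features; the class name is made up for illustration) showing that a lambda is not invisible at runtime: when it throws, the top stack frame is the lambda body under a compiler-generated name.

```java
public class LambdaTrace {
    public static void main(String[] args) {
        Runnable r = () -> { throw new RuntimeException("boom"); };
        try {
            r.run();
        } catch (RuntimeException e) {
            // The top frame is the lambda body; javac gives it a synthetic
            // method name such as "lambda$main$0" rather than leaving it
            // truly anonymous.
            System.out.println(e.getStackTrace()[0].getMethodName());
        }
    }
}
```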

+1  A: 

I would say this is decidedly untrue. Yes, without additional debugging support these constructs can be a bit more difficult to debug. But in many languages they are not truly anonymous at the level where the debugger operates: the debugger doesn't understand language semantics; it understands the final form of the program (the .exe and PDB combination). Most anonymous constructs eventually take concrete form in the final program (this is certainly true for the .NET implementations).
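The same "concrete form" point holds on the JVM. A quick sketch (class and interface names are invented for the example) shows that an anonymous class is anonymous only in the source; the compiler emits a real, named class for it:

```java
public class ConcreteNames {
    interface Greeter { String greet(); }

    public static void main(String[] args) {
        // "Anonymous" only in the source: javac emits a real class file
        // for this instance, here ConcreteNames$1.class.
        Greeter g = new Greeter() {
            public String greet() { return "hi"; }
        };
        System.out.println(g.getClass().getName()); // prints "ConcreteNames$1"
    }
}
```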

Additionally, languages that implement these features often take the time to implement better debugging support for them. Take C# and VB, for example:

  1. Both languages add DebuggerDisplay attributes and override .ToString on the anonymous types they generate to improve the debugging experience. The implementations differ a bit, but the result is pretty much the same.
  2. Inner classes aren't very special in terms of debugging and don't require much, if any, additional work.
  3. The VB and C# teams spent a lot of time in Visual Studio 2008 "unwinding" lambda expressions so that the captured free variables show up as part of the original locals list. This makes it much easier to debug a function.
JaredPar
+2  A: 

It's hard to say whether they inherently reduce debuggability. You can still print a stack trace if an anonymous function throws an exception. DrScheme manages to draw red arrows all over your code when something goes wrong, to represent the stack trace, and it deals with anonymous functions just fine. However, nowhere near as much effort has been put into debugging a language like Scheme or Haskell as has been put into Java with, for example, Eclipse, so of course the debugging tools are likely to be worse.

And, as JaredPar said, Visual Studio seems to do a good job with this and C#.

Claudiu
+1  A: 

The features you listed shouldn't cause problems for a debugger that's designed to handle them. If your debugger assumes you'll be debugging something fundamentally not too different from C, you might have issues.

Now, one feature found more often in functional languages that really does cause headaches for debuggers is heavy use of lazy evaluation. Haskell is particularly problematic in this regard.
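To illustrate why laziness is the real headache, here is a sketch using Java's `Supplier` as a hand-rolled thunk (names invented for the example): the bug is introduced in one place, but nothing fails until the thunk is forced, possibly far away, which is where the debugger drops you.

```java
import java.util.function.Supplier;

public class LazyDebug {
    static Supplier<Integer> makeThunk() {
        int denominator = 0;               // the bug is introduced here...
        return () -> 42 / denominator;     // ...but nothing fails yet
    }

    public static void main(String[] args) {
        Supplier<Integer> thunk = makeThunk(); // no error at creation time
        try {
            // The failure only surfaces when the thunk is forced, far from
            // the code that built it -- the stack trace points here, not at
            // makeThunk().
            thunk.get();
        } catch (ArithmeticException e) {
            System.out.println("fails at force time: " + e.getMessage());
        }
    }
}
```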

camccann
A: 

From my experience, I don't believe this is the case. I'm using the functional features of Scala, which compiles to run on the Java Virtual Machine, and debuggers such as IntelliJ's handle it properly.

Having said that, some code constructs are presented differently from how you'd normally expect. Function blocks appear in some cases as inner classes. Lists appear as a head entity plus a tail list (or it might be the other way around - I've only just started with this!).

Brian Agnew
+3  A: 

There are already some good answers regarding the particular features you have called out.

In general, I would say that some FP features, as well as aspects of programming in a more FP style, do at least 'interact' with the debugging experience. For example, using higher-order functions, one can program in point-free style. Doing so leaves fewer identifiers, which means, for example, fewer things that can easily be inspected in the 'locals' window of a debugger. Closures are typically opaque until you step into their bodies.
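The "fewer identifiers" point can be sketched in Java (variable and class names here are invented for illustration): the point-free pipeline and the pointed version compute the same thing, but only the second gives the debugger a named local to show at a breakpoint.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PointFree {
    public static void main(String[] args) {
        List<String> words = List.of("map", "iter", "fold");

        // Point-free style: no intermediate names, so nothing extra appears
        // in a debugger's locals window.
        List<Integer> a = words.stream()
                .map(String::length)
                .collect(Collectors.toList());

        // Pointed style: 'len' is a named local you can inspect when a
        // breakpoint is hit inside the lambda body.
        List<Integer> b = words.stream()
                .map(w -> {
                    int len = w.length(); // visible in the locals window
                    return len;
                })
                .collect(Collectors.toList());

        System.out.println(a.equals(b)); // prints "true"
    }
}
```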

FP also uses lots of inversion-of-control constructs (lazy evaluation being just one, a 'map' or 'iter' rather than a 'foreach' being another), which changes the control flow and can impact how 'single-stepping' works.
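A small Java sketch of that inversion of control (class name invented for illustration): both loops print the same values, but "step into" behaves differently, because `forEach` calls back into your lambda from library code.

```java
import java.util.List;

public class StepThrough {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3);

        // Direct loop: single-stepping stays entirely in your own code.
        for (int x : xs) {
            System.out.println(x);
        }

        // Inverted control: forEach drives the iteration, so "step into"
        // may first land inside the collection internals before reaching
        // the lambda body.
        xs.forEach(x -> System.out.println(x));
    }
}
```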

As FP becomes more common, I expect that the debugging tools will continue to improve. It is unclear to me whether some FP is 'inherently' harder to debug, but even if that is true, don't forget that much about FP makes your code less likely to need debugging in the first place. :)

Brian