views: 262

answers: 6
Does anyone have advice on using the params keyword in C# for method argument passing? I'm contemplating making overloads for the first 6 arguments and then a 7th overload using the params feature. My reasoning is to avoid the extra array allocation that params requires. This is for some high-performance utility methods. Any advice? Is it a waste of code to create all the overloads?
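For concreteness, the pattern I'm contemplating looks roughly like this (Sum is just a placeholder name; the real methods do more work):

```csharp
using System;

static class Util
{
    // Dedicated overloads: no array allocation at the call site.
    public static int Sum(int a) { return a; }
    public static int Sum(int a, int b) { return a + b; }
    public static int Sum(int a, int b, int c) { return a + b + c; }

    // Fallback: the compiler builds a new int[] for every call that
    // doesn't match one of the fixed-arity overloads above.
    public static int Sum(params int[] values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }
}
```

Overload resolution prefers the normal (fixed-arity) form over the expanded params form, so Sum(1, 2, 3) binds to the three-argument overload and only calls with more arguments pay for the array.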

+5  A: 

You can always pass a Tuple as a parameter, or, if the parameter types are all the same, an IList<T>.
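A minimal sketch of the IList<T> idea (Totals and Total are hypothetical names): the point is that the caller can build the collection once and reuse it across calls, rather than having the compiler allocate a fresh params array per call.

```csharp
using System;
using System.Collections.Generic;

static class Totals
{
    // Accepting IList<int> moves the allocation to the caller,
    // who is free to reuse one collection across many calls.
    public static int Total(IList<int> values)
    {
        int sum = 0;
        for (int i = 0; i < values.Count; i++) sum += values[i];
        return sum;
    }
}
```

A caller with a hot loop can fill and reuse a single List<int> buffer instead of allocating per call; arrays also implement IList<int>, so they work too.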

As other answers and comments have said, you should only optimize after:

  1. Ensuring correct behavior.
  2. Determining the need to optimize.
Oded
+6  A: 

Don't even think about performance at this stage. Create whatever overloads will make your code easier to write and easier to understand at 4am two years from now. Sometimes that means params, sometimes that means avoiding it.

After you've got something that works, figure out whether the parameters are a performance problem. It's not hard to make them more complicated later, but if you add unnecessary complexity now, you'll never be able to remove it.

egrunin
+17  A: 

Honestly, I'm a little bothered by everyone shouting "premature optimization!" Here's why.

  1. What you say makes perfect sense, particularly as you have already indicated you are working on a high-performance library.
  2. Even BCL classes follow this pattern. Consider all the overloads of string.Format or Console.WriteLine.
  3. This is very easy to get right. The whole premise behind the movement against premature optimization is that when you do something tricky for the purposes of optimizing performance, you're liable to break something by accident and make your code less maintainable. I don't see how that's a danger here; it should be very straightforward what you're doing, to yourself as well as any future developer who may deal with your code.

Also, even if you profiled the results of both approaches and saw only a very small difference in speed, there's still the issue of memory allocation. Creating a new array for every method call entails allocating more memory that will need to be garbage collected later. And in some scenarios where "nearly" real-time behavior is desired (such as algorithmic trading, the field I'm in), minimizing garbage collections is just as important as maximizing execution speed.

So, even if it earns me some downvotes: I say go for it.

(And to those who claim "the compiler surely already does something like this"--I wouldn't be so sure. Firstly, if that were the case, I fail to see why BCL classes would follow this pattern, as I've already mentioned. But more importantly, there is a very big semantic difference between a method that accepts multiple arguments and one that accepts an array. Just because one can be used as a substitute for the other doesn't mean the compiler would, or should, attempt such a substitution).

Dan Tao
I agree that optimising early isn't necessarily optimising prematurely, and it sounds like the OP probably has good reasons for doing this, but without knowing more about what's going on inside these methods it's difficult to know for sure: http://www.acm.org/ubiquity/views/v7i24_fallacy.html http://www.bluebytesoftware.com/blog/2010/09/06/ThePrematureOptimizationIsEvilMyth.aspx
LukeH
I couldn't have said it any better.
Jeff M
+6  A: 

Yes, that's the strategy the .NET framework itself uses. String.Concat() is a good example: it has overloads for up to 4 strings, plus a fallback that takes a params string[]. Pretty important here: Concat needs to be fast, and it's there to help the user fall in the pit of success when he uses the + operator instead of a StringBuilder.
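To make the Concat point concrete: the + operator on strings compiles down to these overloads, so a chain of up to four strings becomes a single fixed-arity Concat call with no array, while longer chains fall back to the params string[] overload:

```csharp
using System;

class ConcatDemo
{
    static void Main()
    {
        string a = "x", b = "y", c = "z";

        // Compiles to String.Concat(a, b, c): the dedicated 3-string overload.
        string three = a + b + c;

        // Five operands exceed the fixed-arity overloads, so the compiler
        // emits String.Concat(new string[] { a, b, c, a, b }) instead.
        string five = a + b + c + a + b;

        Console.WriteLine(three); // prints "xyz"
        Console.WriteLine(five);  // prints "xyzxy"
    }
}
```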

The code duplication you'll get is the price. You'll want to profile to see whether the speedup is worth the maintenance headache.

Fwiw: there are plenty of micro-optimizations like this in the .NET framework. They're somewhat necessary because the designers could not really predict how their classes were going to be used. String.Concat() is just as likely to be used in a tight inner loop that is critical to program perf as in, say, a config reader that only runs once at startup. As the end-user of your own code, you typically have the luxury of not having to worry about that. The reverse is also true: the .NET framework code is remarkably free of micro-optimizations where it is unlikely that their benefit would be measurable, like providing overloads when the core code is slow anyway.

Hans Passant
Haha, "fall in the pit of success" -- I like that.
Dan Tao
A: 

You can try something like this to benchmark the performance so you have some concrete numbers to make decisions with.

In general, object allocation is slightly faster than in C/C++ and deletion is much, much faster for small objects -- until you have tens of thousands of them being made per second. Here's an old article regarding memory allocation performance.
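A rough sketch of such a benchmark (Add and AddParams are made-up stand-ins; Stopwatch timings are noisy, so treat this as a starting point rather than a rigorous measurement, and remember the params version also creates garbage that must be collected later):

```csharp
using System;
using System.Diagnostics;

class ParamsBenchmark
{
    public static int Add(int a, int b, int c) { return a + b + c; }

    public static int AddParams(params int[] v)
    {
        int total = 0;
        foreach (int x in v) total += x;
        return total;
    }

    static void Main()
    {
        const int N = 10000000;
        int sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink += Add(1, 2, 3);
        sw.Stop();
        Console.WriteLine("overload: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink += AddParams(1, 2, 3); // allocates int[3] per call
        sw.Stop();
        Console.WriteLine("params:   {0} ms", sw.ElapsedMilliseconds);

        GC.KeepAlive(sink);
    }
}
```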

Rei Miyasaka
A: 

My point is that if your method can accept an unlimited number of parameters, the logic inside it presumably works on an array anyway. So having overloads for a limited number of parameters wouldn't help, unless you can implement the fixed-arity cases in a whole different way that is much faster.

For example, if you're handing the parameters on to Console.WriteLine, there's a hidden array creation in there too, so you end up with an array either way.
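In other words (FormatCore and Format2 are hypothetical names for illustration), a fixed-arity overload that merely forwards to a params core doesn't remove the allocation, it only moves it:

```csharp
using System;

static class Formatter
{
    // The params core: the compiler builds an object[] at each call site.
    static string FormatCore(string format, params object[] args)
    {
        return string.Format(format, args);
    }

    // This fixed-arity overload still causes an object[] allocation,
    // because the forwarding call expands into the params array anyway.
    public static string Format2(string format, object a, object b)
    {
        return FormatCore(format, a, b);
    }
}
```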

And, sorry to bother Dan Tao, but I also feel this is premature optimization, because you need to know what difference the overloads would actually make. If your application is that performance-critical, you'd need to implement both approaches, run a test, and compare execution times.

Iravanchi
Yes, that's a good point. The context in which I asked the question was, in fact, that I'm able to express a meaningful function when one or more arguments are provided.
Carlo V. Dango