views: 1131

answers: 15

How would big O notation help in my day to day c# programming? Is this just an academic exercise?

+4  A: 

Naw, I was wondering that too, but now I find myself thinking about big-O just about every time I use a library.

Big-O lets you know the asymptotic running time of a function, so you can decide whether data structure A is faster than data structure B for your purposes.

For example, you might be tempted to use something like an ArrayList when what you really need is a Queue. When you add an element to an ArrayList, the running time is sometimes O(n) (because it occasionally needs to create a new array and copy all the elements over), whereas in a Queue it's O(1), so you can easily see that the queue would be faster. This is actually kind of a poor example, as there are many other differences between these two structures, but you get the idea ;)
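
A rough C# sketch of that difference, using the generic List<T> and Queue<T> in place of ArrayList (the collection size is arbitrary and the timings are only ballpark):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class QueueVsList
    {
        static void Main()
        {
            const int n = 50000;

            // Queue<T>.Dequeue() is O(1): it just advances a head index.
            var queue = new Queue<int>();
            for (int i = 0; i < n; i++) queue.Enqueue(i);
            var sw = Stopwatch.StartNew();
            while (queue.Count > 0) queue.Dequeue();
            Console.WriteLine("Queue drain: {0} ms", sw.ElapsedMilliseconds);

            // List<T>.RemoveAt(0) is O(n): every remaining element shifts left,
            // so draining the whole list from the front is O(n^2) overall.
            var list = new List<int>();
            for (int i = 0; i < n; i++) list.Add(i);
            sw = Stopwatch.StartNew();
            while (list.Count > 0) list.RemoveAt(0);
            Console.WriteLine("List drain:  {0} ms", sw.ElapsedMilliseconds);
        }
    }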

Mark
Suggestion for a better example: suppose I need to keep track of a number of elements (say, I'm checking for duplicates). Should I use a HashSet or List? Both have Add and Contains methods, so both meet my purpose. But they scale in very different ways, and big-O notation tells me roughly what that scaling is. (This also illustrates that big-O is not the be-all and end-all since List may actually be *quicker* for small sets due to a smaller constant factor.)
itowlson
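
A minimal C# sketch of itowlson's duplicate check (data invented; HashSet<T> requires .NET 3.5 or later):

    using System;
    using System.Collections.Generic;

    class DuplicateCheck
    {
        // O(n^2) overall: List<T>.Contains does a linear scan on every call.
        static bool HasDuplicateList(IEnumerable<int> items)
        {
            var seen = new List<int>();
            foreach (int item in items)
            {
                if (seen.Contains(item)) return true;
                seen.Add(item);
            }
            return false;
        }

        // O(n) on average: HashSet<T>.Add is a constant-time hash probe,
        // and it returns false when the item is already present.
        static bool HasDuplicateSet(IEnumerable<int> items)
        {
            var seen = new HashSet<int>();
            foreach (int item in items)
                if (!seen.Add(item)) return true;
            return false;
        }

        static void Main()
        {
            int[] data = { 3, 1, 4, 1, 5 };
            Console.WriteLine(HasDuplicateList(data)); // True
            Console.WriteLine(HasDuplicateSet(data));  // True
        }
    }
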
If the ArrayList is properly implemented, then the amortized time of adding an element is still O(1) (although the worst case is indeed O(n)). To sum up: Computer Science is something you should study if you're going to call yourself a programmer.
Michał Bendowski
+25  A: 

Big-O tells you the complexity of an algorithm in terms of the size of its inputs. This is essential if you want to know how algorithms will scale. If you're designing a big website and you have a lot of users, the time it takes you to handle those requests is important. If you have lots of data and you want to store it in a structure, you need to know how to do that efficiently if you're going to write something that doesn't take a million years to run.

It's not that Big-O notation itself will help you. It's that if you understand Big-O notation, you understand the worst-case complexity of algorithms. Essentially, Big-O gives you a high-level sense of which algorithms are fast, which are slow, and what the tradeoffs are. I don't see how you can understand the performance implications of anything in, say, the .NET collections library if you don't understand this.

I won't go into more detail here, since this question has been asked many times, but suffice it to say that this is something you should understand. Here's a fairly highly voted previous Big-O question to get you started.
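
As a concrete .NET illustration (names and sizes invented), here are the documented complexities of two collections under a lookup-heavy load:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class RequestLookups
    {
        static void Main()
        {
            // n stored user names; each incoming request needs one lookup.
            List<string> users = Enumerable.Range(0, 50000)
                                           .Select(i => "user" + i)
                                           .ToList();

            // List<T>.Contains is documented as O(n): m requests cost O(m * n).
            bool found = users.Contains("user49999");

            // Dictionary<TKey, TValue> lookup is O(1) on average: m requests
            // cost O(m), which is the version that keeps scaling.
            Dictionary<string, bool> index = users.ToDictionary(u => u, u => true);
            bool foundFast = index.ContainsKey("user49999");

            Console.WriteLine("{0} {1}", found, foundFast);
        }
    }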

tgamblin
I'd just like to emphasise that taking a million years isn't a metaphor here...
Rich Bradshaw
+1 But I'd just like to add that anyone who's seen Swordfish knows that Big-O goes out the window when John Travolta has a gun to your head.
Joe Holloway
+1  A: 

No, it really helps to know what the efficiencies of different algorithms are.

If you take the time to understand Big O, every time you sit down to code a loop, you'll be thinking "How can I make this more efficient?" - which is a good thing :)

Jacob Relkin
+7  A: 

Big O notation allows you to analyze algorithms in terms of overall efficiency and scalability. It abstracts away constant-factor differences in efficiency, which vary by platform, language, and OS, to focus on the inherent efficiency of the algorithm and how it varies with the size of the input.
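
To make that concrete, here's a tiny C# sketch (input sizes chosen arbitrarily): a faster machine or language would rescale every row by a constant, but the gaps between the columns — the growth classes — stay.

    using System;

    class GrowthRates
    {
        static void Main()
        {
            // Constant factors scale every row by the same amount;
            // the gaps *between columns* are what big-O describes.
            Console.WriteLine("{0,10} {1,14} {2,16}", "n", "n log n", "n^2");
            foreach (int n in new[] { 10, 1000, 100000 })
            {
                Console.WriteLine("{0,10} {1,14:F0} {2,16:F0}",
                                  n, n * Math.Log(n, 2), (double)n * n);
            }
        }
    }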

Larry Watanabe
+3  A: 

Knowing the relative strengths and weaknesses of the different types of containers and sort algorithms helps you choose the correct one for the current situation. Big O notation is a convenient way to express the major difference: the algorithmic time complexity.

Paul Tomblin
+4  A: 

Big-O is important in algorithm design more than in day-to-day hacks. Generally you don't need to know Big-O unless you are working with a lot of data (i.e., if you need to sort an array of 10,000 elements, not 10). In a lot of cases there are libraries that handle the tricky stuff for you (like a built-in sort function), but in some cases you need to do it yourself.

Bottom line is that Big-O is fairly easy to learn, so just learn it. It will help you in a bunch of cases.
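
As a quick illustration (array size arbitrary), here is a hand-rolled O(n^2) sort next to the built-in one:

    using System;
    using System.Linq;

    class SortDemo
    {
        // A hand-rolled selection sort: O(n^2) comparisons.
        static void SelectionSort(int[] a)
        {
            for (int i = 0; i < a.Length - 1; i++)
            {
                int min = i;
                for (int j = i + 1; j < a.Length; j++)
                    if (a[j] < a[min]) min = j;
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
            }
        }

        static void Main()
        {
            var rng = new Random(42);
            int[] data = Enumerable.Range(0, 10000).Select(i => rng.Next()).ToArray();
            int[] copy = (int[])data.Clone();

            // The built-in O(n log n) sort: the "library handles the
            // tricky stuff" case from the answer above.
            Array.Sort(copy);

            SelectionSort(data); // already noticeably slower at n = 10,000
            Console.WriteLine(data.SequenceEqual(copy)); // True
        }
    }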

twolfe18
(+1) big-O is a waste of time; get Meyers' book on the C++ STL, read it, and learn the differences in complexity between the algorithms. That's all you need to know. If you ever end up writing algorithms and data structures you won't need big-O notation to structure your thoughts -- profiling and hard measurement >>>>> big-O
Hassan Syed
How are you going to fix your slow algorithm if you don't understand what the problem is? Profiling isn't going to tell you the complexity of your algorithm.
tgamblin
Are you serious, Hassan? You'd rather remember the running times of specific algorithms on specific datasets than remember "linear" or "quadratic" complexity??? -1 to you sir! Complexity *is* Big-O!
Mark
+3  A: 

Writing good software is largely about understanding and making informed decisions about trade-offs in your design. For example, sometimes you can tolerate a larger memory footprint for faster execution time, sometimes you can sacrifice execution time for a smaller memory footprint and so on.

Big-O notation is a formalization of these trade-offs so that software engineers can speak a common language about them. You may never have to formally prove the Big-O characteristics of an algorithm you design, but if you don't understand the concept on an abstract level, then chances are you won't be making good trade-offs in the software you develop.
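
One classic form of that trade-off is memoization. A minimal C# sketch, with Fibonacci as a stand-in workload: the cache spends O(n) memory to cut the running time from exponential to O(n).

    using System;
    using System.Collections.Generic;

    class Memoization
    {
        static readonly Dictionary<int, long> Cache = new Dictionary<int, long>();

        // Trading memory for time: caching results turns the exponential
        // naive recursion into O(n) time, at the cost of O(n) extra memory.
        static long Fib(int n)
        {
            if (n < 2) return n;
            long cached;
            if (Cache.TryGetValue(n, out cached)) return cached;
            long result = Fib(n - 1) + Fib(n - 2);
            Cache[n] = result;
            return result;
        }

        static void Main()
        {
            Console.WriteLine(Fib(90)); // instant; the naive version would take years
        }
    }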

Joe Holloway
+1  A: 

Big-O is a means of measuring, or meaningfully ball-parking, the performance of an algorithm in terms of time. So if any optimization needs to be done in that respect, big-O is a valuable tool. It is a foundational chapter in algorithms and data structures classes. I agree with the other replies mentioning that you might not use it directly in your day-to-day programming work, but even that day-to-day code has a performance that can be measured if required.
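
A minimal sketch of exactly that measurement, using System.Diagnostics.Stopwatch (a single run like this is only a ballpark; repeat and average before trusting it):

    using System;
    using System.Diagnostics;
    using System.Linq;

    class Measure
    {
        static void Main()
        {
            int[] data = Enumerable.Range(0, 1000000).ToArray();

            // Even day-to-day code can be measured when it matters.
            var sw = Stopwatch.StartNew();
            long sum = data.Sum(x => (long)x);
            sw.Stop();

            Console.WriteLine("sum = {0}, took {1:F2} ms",
                              sum, sw.Elapsed.TotalMilliseconds);
        }
    }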

denchr
A: 

It's all about ignoring the constants.

mfx
-1. This is not helpful at all
Tamás Szelei
Why not? That's exactly what big-O is about: ignoring constant factors, such as what CPU you have, how many of them, what programming language you use, or what kind of operation you count. What's important is the asymptotic behaviour of your algorithm when the problem size increases, i.e. whether it's log(N), N, N log(N), N^2, N^k, or k^N.
mfx
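
For reference, the formal definition makes the "ignoring constants" explicit; the constant c is exactly where machine speed, language overhead, and the choice of which operation to count disappear:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N} :\quad f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
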
Yes, but for someone trying to learn about big-O notation, the answer isn't helpful and will probably lead to _more_ confusion.
Chinmay Kanchi
+5  A: 

I am reading the answers and I (seriously) think that big-O is underestimated.

As coders who make money from coding, we need to know what big-O is and why we need it.

Let me explain what I think: big-O notation describes the efficiency/performance of your work. You have to know how fast your code runs as the inputs get bigger, because in real life you can't know the exact number of inputs. Furthermore, you can't compare two different algorithmic approaches without an asymptotic notation, so if you want to choose the better one, you compare them with big-O and see which one fits your situation. Both may be inefficient, but you will know which one is better.
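
For instance, a minimal C# sketch of such a comparison, with search as a stand-in problem (data invented):

    using System;

    class CompareApproaches
    {
        // Approach A: linear scan, O(n) comparisons per lookup.
        static int LinearSearch(int[] sorted, int target)
        {
            for (int i = 0; i < sorted.Length; i++)
                if (sorted[i] == target) return i;
            return -1;
        }

        static void Main()
        {
            int[] sorted = { 1, 3, 5, 7, 9, 11 };

            // Approach B: binary search, O(log n) comparisons per lookup.
            // At n = 1,000,000 that is ~20 comparisons instead of up to
            // 1,000,000, and the gap keeps widening as n grows.
            Console.WriteLine(LinearSearch(sorted, 9));       // 4
            Console.WriteLine(Array.BinarySearch(sorted, 9)); // 4
        }
    }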

PS: Actually I didn't expect to see people saying "It's all about ignoring the constants." That's why I may have overreacted. (And I am sorry for my bad English.)

rahmivolkan
+2  A: 

Yeah, it is just an "academic exercise". And be assured, as long as some stupid academics do such exercises, you will be able to do a good programming job from day to day :-)

By the way, if these academics don't look at lambda calculus, graph theory, automata, Turing machines or something else, they'll find their shortest path to a dinner with the philosophers.

For further information, have a look at a good academic book or at the excellent answers above ...

MartinStettner
+2  A: 

This is a question that (almost) everyone asks during their CS studies, especially if they plan to be industry developers.

As everyone here indicated, yes, it's critical. Although you might be able to evade it, or never care about performance, at some point you're going to be affected by it. At some point you will have to manipulate a lot of data in memory, and you will have to find a way to do it efficiently. You will have to choose between existing collections in some cases, and in others will have to design your own.

That being said, I have found that some schools push the mathematical/algebraic side too hard on their undergraduates, at the expense of its importance for real-world use. Students who are less interested in that algebraic side develop a distaste. IMHO, there is no need for most CS students to know how to calculate Big O beyond the basics. Forcing things like the Master theorem down their throats is not going to make them appreciate it.

Uri
A: 

Remember that big-O tells you how algorithms scale with large numbers of inputs; it doesn't tell you which algorithm is faster for your task.

Building pyramids is O(n) while sorting pictures of them is, at best, O(n log n), but that doesn't mean it's quicker to build them than to make a slide show.
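
A small C# sketch of that caveat (iteration count arbitrary; which side wins at this size depends on your runtime and hardware):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class SmallNConstants
    {
        static void Main()
        {
            // Big-O only promises the hash set wins *eventually*. On a
            // five-element collection the O(n) scan may well beat the O(1)
            // hash lookup, because hashing has a bigger constant factor.
            var list = new List<int> { 1, 2, 3, 4, 5 };
            var set = new HashSet<int> { 1, 2, 3, 4, 5 };
            bool sink = false;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 10000000; i++) sink |= list.Contains(3);
            Console.WriteLine("List.Contains:    {0} ms", sw.ElapsedMilliseconds);

            sw = Stopwatch.StartNew();
            for (int i = 0; i < 10000000; i++) sink |= set.Contains(3);
            Console.WriteLine("HashSet.Contains: {0} ms ({1})",
                              sw.ElapsedMilliseconds, sink);
        }
    }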

Martin Beckett
A: 

Think of efficiency, my friend!

The difference can be seen when your boss is yelling at you to find a client's address by name, and you're given either a huge pile of unsorted papers or an address book indexed by name!

In big-O notation, that's O(n) for running through the huge pile of unsorted papers, versus O(1) for looking the name up in the indexed address book.
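
A minimal C# sketch of the two lookups (names and addresses invented):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class AddressLookup
    {
        static void Main()
        {
            // The "huge pile of unsorted papers": a flat list, O(n) per search.
            var pile = new List<KeyValuePair<string, string>>
            {
                new KeyValuePair<string, string>("Ada", "12 Analytical Way"),
                new KeyValuePair<string, string>("Edsger", "1 Shortest Path"),
            };
            string slow = pile.First(p => p.Key == "Ada").Value;

            // The "address book indexed by name": a hash map, O(1) per lookup.
            Dictionary<string, string> book =
                pile.ToDictionary(p => p.Key, p => p.Value);
            string fast = book["Ada"];

            Console.WriteLine("{0} / {1}", slow, fast);
        }
    }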

mduvall
A: 

I want to understand the difference between big O and little o. Thank you.

mahdi