Big O notation describes the limiting behavior of a function as its argument grows toward infinity.
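As an aside, the formal definition behind that phrase (standard textbook material, not something from the .NET docs) is that f is O(g) when, past some threshold, f is bounded above by a constant multiple of g:

```latex
% f is O(g): beyond some threshold n0, |f| never exceeds a constant multiple of |g|.
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that}\ |f(n)| \le c \cdot |g(n)| \quad \text{for all } n \ge n_0
```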
In computer science, you typically use it to describe how an algorithm scales as the data set gets larger. For example, lookup performance in collections varies by collection type, and is described using Big O notation. A `Dictionary<T,U>` in the .NET Framework has an `Item` property which is documented as follows:
> Getting or setting the value of this property approaches an O(1) operation.
That basically says that no matter how many items you add to the collection, getting an item out takes roughly constant time.
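To make that concrete, here's a minimal sketch (class and variable names are my own, and the sizes are arbitrary) of the kind of lookup the docs are describing:

```csharp
using System;
using System.Collections.Generic;

class DictionaryLookupDemo
{
    static void Main()
    {
        // Illustrative size; any large number works.
        var dict = new Dictionary<int, string>();
        for (int i = 0; i < 1000000; i++)
            dict[i] = "value " + i;

        // The indexer (the Item property) hashes the key and goes
        // straight to the matching bucket; it never scans the other
        // entries, which is why lookup approaches O(1).
        Console.WriteLine(dict[123456]);
    }
}
```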
A `List<T>`, on the other hand, documents its `Contains` method as follows:
> This method performs a linear search; therefore, this method is an O(n) operation, where n is Count.
This basically says the algorithm gets slower, linearly, as you add more items: searching a list of 1,000 items will take roughly 10x as long, on average, as searching a list of 100 items, and so on.
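If you want to see that growth yourself, here's a rough timing sketch (sizes are illustrative, and a single `Stopwatch` measurement is noisy, so treat the numbers as ballpark). The second run should take roughly 10x as long as the first:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class ListContainsDemo
{
    static void Main()
    {
        foreach (int size in new[] { 100000, 1000000 })
        {
            var list = new List<int>();
            for (int i = 0; i < size; i++)
                list.Add(i);

            var sw = Stopwatch.StartNew();
            // Worst case: the value is not present, so Contains must
            // compare against every one of the `size` elements.
            bool found = list.Contains(-1);
            sw.Stop();

            Console.WriteLine($"n = {size,8}: {sw.Elapsed.TotalMilliseconds:F3} ms (found: {found})");
        }
    }
}
```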