Are there any O(1/n) algorithms?
Or anything else which is less than O(1)?
I believe quantum algorithms can do multiple computations "at once" via superposition...
I doubt this is a useful answer.
That's not possible. The definition of Big-O is a "not greater than" inequality:
A(n) = O(B(n))  <=>  there exists a constant C > 0 such that A(n) <= C*B(n) for all n
So B(n) is in fact an upper bound; therefore, if it decreases as n increases, the estimate does not change.
OK, I did a bit of thinking about it, and perhaps there exists an algorithm that could follow this general form:
You need to compute the traveling salesman problem for a 1000 node graph, however, you are also given a list of nodes which you cannot visit. As the list of unvisitable nodes grows larger, the problem becomes easier to solve.
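As a rough illustration of that idea (mine, not the answerer's), here is a minimal brute-force sketch in Python; dist, nodes and forbidden are assumed inputs, and the work shrinks factorially as the forbidden set grows (purely illustrative: it's only feasible once most nodes are forbidden):

from itertools import permutations

def shortest_tour(dist, nodes, forbidden):
    # Only the nodes we are still allowed to visit take part in the search.
    allowed = [v for v in nodes if v not in forbidden]
    best = None
    for order in permutations(allowed):
        length = sum(dist[a][b] for a, b in zip(order, order[1:]))
        if best is None or length < best:
            best = length
    return best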
If a solution exists, it can be prepared in advance and accessed in constant time, i.e. immediately. For instance, use a LIFO data structure if you know the query will ask for reverse order: the data is then already sorted, given that the appropriate model (LIFO) was chosen.
O(1/n) is not less than O(1); it basically means that the more data you have, the faster the algorithm runs. Say you get an array and always fill it up to 10100 elements if it has fewer than that, and do nothing if it has more. This one is not O(1/n), of course, but something like O(-n) :) Too bad big-O notation does not allow negative values.
What about this:
void FindRandomInList(List<int> l)
{
    var rng = new Random();
    while (true)
    {
        // Expected number of probes is roughly (range of Next()) / l.Count,
        // so it shrinks as the list grows.
        int candidate = rng.Next();
        if (l.Contains(candidate))
            return;
    }
}
as the size of the list grows, the expected runtime of the program decreases.
From my previous learning of big O notation, even if you need 1 step (such as checking a variable, doing an assignment), that is O(1).
Note that O(1) is the same as O(6), because the "constant" doesn't matter. That's why we say O(n) is the same as O(3n).
So if you need even 1 step, that's O(1)... and since your program needs at least 1 step, the minimum an algorithm can go is O(1). Unless we don't do it at all, in which case it is O(0), I think? If we do anything at all, then it is O(1), and that's the minimum it can go.
(If we choose not to do it, then it may become a Zen or Tao question... in the realm of programming, O(1) is still the minimum).
Or how about this:
programmer: boss, I found a way to do it in O(1) time!
boss: no need to do it, we are bankrupt this morning.
programmer: oh then, it becomes O(0).
You can't go below O(1); however, O(k) where k grows more slowly than n is possible. These are called sublinear time algorithms. For some problems, a sublinear time algorithm can only give an approximate solution. Sometimes, however, an approximate solution is just fine, perhaps because the dataset is too large, or because it's far too computationally expensive to examine all of it.
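For a concrete (if exact rather than approximate) illustration of sublinear time, here is a minimal binary search sketch in Python; it does O(log n) work, well below linear, but never below O(1):

def binary_search(sorted_values, target):
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid              # found: index of the target
        if sorted_values[mid] < target:
            lo = mid + 1            # discard the lower half
        else:
            hi = mid - 1            # discard the upper half
    return -1                       # not present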
sharptooth is correct, O(1) is the best possible performance. However, it does not imply a fast solution, just a fixed time solution.
An interesting variant, and perhaps what is really being suggested, is which problems get easier as the population grows. I can think of one, albeit contrived and tongue-in-cheek, answer:
Do any two people in a set have the same birthday? When n exceeds 365, return true. For fewer than 365, though, this is O(n ln n). Perhaps not a great answer, since the problem doesn't slowly get easier but simply becomes O(1) for n > 365.
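A minimal sketch of that birthday check (the function name is mine): once n exceeds 365 the pigeonhole principle answers immediately; otherwise we fall back to a real comparison (the answer above mentions an O(n ln n) sorting approach; this hash-set variant is O(n) expected):

def has_shared_birthday(birthdays):
    # birthdays: day-of-year values, 1..365
    if len(birthdays) > 365:
        return True                                 # pigeonhole: a collision is guaranteed
    return len(set(birthdays)) < len(birthdays)     # direct check for smaller inputs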
Which problems get easier as the population grows? One answer is something like BitTorrent, where download speed is an inverse function of the number of nodes. Unlike a car, which slows down the more you load it, a file-sharing network like BitTorrent speeds up the more nodes are connected.
What about not running the function at all (NOOP)? or using a fixed value. Does that count?
This question isn't as stupid as it might seem. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the big-O notation:
f(x) = O(g(x))  <=>  there exist constants M > 0 and x0 such that |f(x)| <= M*|g(x)| for all x > x0
Now you can easily substitute 1/x for g(x) … it's obvious that the above definition still holds for some f.
For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:
from time import sleep

def get_faster(list):
    # The sleep time shrinks as the input grows.
    how_long = (1 / len(list)) * 100000
    sleep(how_long)
Clearly, this function spends less time as the input size grows … at least until some limit, enforced by the hardware (precision of the numbers, the minimum time that sleep can wait, time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).
But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
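Here is a minimal Horspool sketch in Python (mine, not from the answer above) to make the effect concrete: longer patterns allow longer skips, so the expected work per character of the haystack drops as the pattern grows:

def horspool_search(haystack, needle):
    m, n = len(needle), len(haystack)
    if m == 0 or m > n:
        return -1 if m > n else 0
    # Shift table: how far the window may slide when the character aligned
    # with the pattern's last position is c (rightmost occurrence wins).
    shift = {c: m - i - 1 for i, c in enumerate(needle[:-1])}
    pos = 0
    while pos <= n - m:
        if haystack[pos:pos + m] == needle:
            return pos
        pos += shift.get(haystack[pos + m - 1], m)   # characters not in the pattern allow a full skip
    return -1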
If the answer is the same regardless of the input data, then you have an O(0) algorithm.
Or, in other words: the answer is known before the input data is submitted, so the function could be optimised out entirely, giving O(0).
I don't understand the mathematics, but the concept appears to be looking for a function that takes less time as you add more inputs. In that case, what about:
def f(*args):
    if len(args) < 2:
        args = args + (10,)   # supply the missing second argument
    return args
This function is quicker when the optional second argument is supplied, because otherwise it has to be assigned. I realise this isn't an equation, but the Wikipedia page says big-O is often applied to computing systems as well.
Example: You are in a labyrinth. What's the way out? The answer is O(1) if you put the turns you took onto a LIFO: the path back is already in reverse order. Put them on a FIFO instead and you need to sort (reverse) first, which is slower. I suppose any solution can end up sorted before the query, given that the data model is chosen flexibly from at least LIFO, FIFO or a priority queue.
O(1) simply means "constant time".
When you add an early exit to a loop[1] you're (in big-O notation) turning an O(1) algorithm into O(n), but making it faster in practice.
The trick is that, in general, the constant-time algorithm is the best and linear is better than exponential, but for small values of n, the exponential algorithm might actually be faster.
1: Assuming a static list length for this example
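A small sketch (mine, not the answerer's) of the two variants, assuming the list length is fixed as footnote 1 says: the full scan is "O(1)" only because the length is a constant, while the early-exit version is formally O(n) yet typically finishes sooner:

def scan_all(fixed_list, target):
    found = False
    for item in fixed_list:        # always walks the whole (fixed-size) list
        if item == target:
            found = True
    return found

def scan_early_exit(fixed_list, target):
    for item in fixed_list:        # stops as soon as the target turns up
        if item == target:
            return True
    return False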
No, this is not possible:
As n tends to infinity in 1/n we eventually achieve 1/(inf), which is effectively 0.
Thus, the big-oh class of the problem would be O(0) with a massive n, but closer to constant time with a low n. This is not sensible, as the only thing that can be done in faster than constant time is:
void nothing() {};
And even this is arguable!
As soon as you execute a command, you're in at least O(1), so no, we cannot have a big-oh class of O(1/n)!
I think the source of the debate is possibly a misconception about big-O and big-Theta notation.
In some answers and comments it was mentioned that an O(1/n) algorithm can be considered O(1), as the function is already bounded by some constant. This is true and indeed it's the nature of big-O notation: you can say QuickSort is O(n^100) and you are technically correct, as it's a valid upper bound (just not a very tight one in this example).
As a result, technically, this question is obvious: if anything is less than O(1) (or O(whatever), for that matter), it is also O(1) (respectively O(whatever)), by definition.
In everyday speech, we mostly use big-O to mean a tight bound (instead of big-Theta, which is the correct notation for that). Basically, the question should not include the "or anything less than O(1)" part. Konrad's answer is completely right.
These notations are theoretical concepts. They don't take into account one important thing that limits us in practice: our machines are finite, and this is where it starts to look like a paradox. On a finite machine, our n always has an upper bound, therefore 1/n has a lower bound which is a constant. So the algorithm practically looks O(1) on all finite machines, simply because our n can never become big enough.
To those who believe that there are no o(1) (little-o) algorithms in practice (as mentioned in some answers), I can argue that, in practice, all algorithms that eventually halt are also O(1). By similar reasoning to the above: because the input itself is always bounded, any function of the input is also bounded (it can be very large, but it still has a constant bound!), so every algorithm is O(1) on a finite machine. (Put differently, the pigeonhole principle implies that a deterministic finite state machine with S different states can make at most S transitions before revisiting a state. So any algorithm that eventually halts, which must never return the machine to an earlier state, can make at most S-1 state transitions, so every algorithm is O(1).)
As pointed out in the comments, computation, as defined by the Turing machine model, is inherently discrete. That implies that 1/n algorithms cannot exist in a discrete computational model. My point is that when we use asymptotic growth to talk about practical implementations of algorithms, we are already ignoring some limitations (for good reasons). There can be algorithms whose running time decreases (until it reaches a bound, of course) as the input grows (as Konrad said). Similarly, one may stretch the notation a little to describe the running time of an implementation of such an algorithm.
Big-O notation represents the worst-case scenario for an algorithm, which is not the same thing as its typical run time. It is simple to prove that an O(1/n) algorithm is an O(1) algorithm. By definition,
T(n) = O(1/n) means T(n) <= c*(1/n) for all n >= n0, for some constants c > 0 and n0
Since 1/n <= 1/n0 for all n >= n0, this gives T(n) <= c/n0, a constant
A constant upper bound is exactly O(1), since big-O notation ignores constants (i.e. the value of c/n0 doesn't matter)
I often use O(1/n) to describe probabilities that get smaller as the input gets larger -- for example, the probability that a fair coin comes up tails on every one of log2(n) flips is (1/2)^log2(n) = 1/n, which is O(1/n).
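A quick numeric check of that claim (the helper name is mine):

import math

def prob_all_tails(n):
    flips = math.log2(n)     # number of fair coin flips
    return 0.5 ** flips      # (1/2)^log2(n) == 1/n

assert abs(prob_all_tails(1024) - 1 / 1024) < 1e-12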
Nothing is smaller than O(1). Big-O notation gives the largest order of complexity for an algorithm.
If an algorithm has a runtime of n^3 + n^2 + n + 5, then it is O(n^3). The lower powers don't matter here at all, because as n -> Inf, n^2 will be irrelevant compared to n^3.
Likewise, as n -> Inf, O(1/n) will be irrelevant compared to O(1); hence 3 + O(1/n) will be the same as O(1), making O(1) the smallest possible computational complexity.
I see an algorithm that is O(1/n), admittedly only up to an upper bound:
You have a large series of inputs which are changing due to something external to the routine (maybe they reflect hardware, or it could even be some other core in the processor changing them), and you must select a random but valid one.
Now, if the data weren't changing, you would simply make a list of items, pick one randomly, and get O(1) time. However, the dynamic nature of the data precludes making a list; you simply have to probe randomly and test the validity of the probe. (And note that inherently there is no guarantee the answer is still valid when it's returned. This could still have uses -- say, the AI for a unit in a game: it could shoot at a target that dropped out of sight while it was pulling the trigger.)
This has a worst-case performance of infinity but an average-case performance that goes down as the data space fills up.
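A minimal sketch of that probe-and-test idea (names are mine; is_valid stands for whatever external validity check applies): the expected number of probes is roughly the total number of slots divided by the number of currently valid ones, so it falls as the space fills up, while the worst case is unbounded:

import random

def pick_random_valid(slots, is_valid):
    while True:
        candidate = random.choice(slots)   # probe a random slot
        if is_valid(candidate):            # validity may change under our feet
            return candidate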
Yes.
There is precisely one algorithm with runtime O(1/n), the "empty" algorithm.
For an algorithm to be O(1/n) means that it executes asymptotically in fewer steps than the algorithm consisting of a single instruction. If it executes in fewer steps than one step for all n > n0, it must consist of precisely no instructions at all for those n. Since checking 'if n > n0' costs at least 1 instruction, it must consist of no instructions for all n.
Summing up: the only algorithm which is O(1/n) is the empty algorithm, consisting of no instructions.
Here's a simple O(1/n) algorithm. And it even does something interesting!
function foo(list input) {
    // The number of iterations shrinks as the input grows.
    int m = max_value / input.size;
    double output = 0;

    for (int i = 0; i < m; i++)
        output += random(0, 1);   // add a random value in [0, 1)

    return output;
}
O(1/n) is possible, since it describes how the output of a function changes given increasing size of input. If we are using the function 1/n to describe the number of instructions a function executes, then there is no requirement that the function take zero instructions for any input size. Rather, for every input size n above some threshold, the number of instructions required is bounded above by a positive constant multiplied by 1/n. As there is no actual number for which 1/n is 0, and the constant is positive, there is no reason why the function would be constrained to take 0 or fewer instructions.
In numerical analysis, approximation algorithms should have sub-constant asymptotic complexity in the approximation tolerance.
class Function
{
public double[] ApproximateSolution(double tolerance)
{
    // if this isn't sub-constant in the parameter, it's rather useless
    throw new NotImplementedException();
}
}
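As a sketch of what "complexity in the tolerance" means (bisection standing in for whatever the class actually does), here is a Python root finder whose work grows only like log2((hi - lo) / tolerance), i.e. it shrinks as the requested tolerance is relaxed:

def bisect_root(f, lo, hi, tolerance):
    # Assumes f(lo) and f(hi) have opposite signs.
    while hi - lo > tolerance:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid               # root lies in the lower half
        else:
            lo = mid               # root lies in the upper half
    return (lo + hi) / 2.0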
There are sub-linear algorithms. In fact, the Boyer-Moore search algorithm is a very good example of one.
As has been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken will have to approach 0.
Of course, there are some algorithms, like that defined by Konrad, which seem like they should be less than O(1) in at least some sense.
from time import sleep

def get_faster(list):
    how_long = 1 / len(list)
    sleep(how_long)
If you want to investigate these algorithms, you should either define your own asymptotic measurement or your own notion of time. For example, in the above algorithm, I could allow the use of a number of "free" operations a set number of times. In the above algorithm, if I define t' by excluding the time for everything but the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behavior here is trivial. In fact, I am sure that someone out there can come up with senses that give non-trivial results.
Many people have had the correct answer (No). Here's another way to prove it: in order to have a function, you have to call the function, and you have to return an answer. This takes a certain constant amount of time. EVEN IF the rest of the processing took less time for larger inputs, printing out the answer (which we can assume to be a single bit) takes at least constant time.
I had such a doubt way back in 2007; nice to see this thread. I came to this thread from my reddit thread -> http://www.reddit.com/r/programming/comments/d4i8t/trying_to_find_an_algorithm_with_its_complexity/