Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

+3
A:

I believe quantum algorithms can do multiple computations "at once" via superposition...

I doubt this is a useful answer.

Jeff Meatball Yang
2009-05-25 06:20:14

That would still be constant time, i.e. O(1), meaning it takes the same amount of time to run for data of size n as it does for data of size 1.

freespace
2009-05-25 06:22:56
Good point. The main use of quantum algorithms is to tackle exponential classical algorithms and bring them down to polynomial time. No algorithm would get faster as n grows larger, as O(1/n) would imply.

Jeff Meatball Yang
2009-05-25 06:30:35
But what if the problem was a pale ale? (ah. hah. ha.)

Jeff Meatball Yang
2009-05-25 06:31:37
That would be a super position to be in.

Daniel Earwicker
2009-05-25 07:27:50
Quantum algorithms can do multiple computations, but you can only retrieve the result of one computation, and you can't choose which result to get. Thankfully, you can also do operations on a quantum register as a whole (for example, QFT) so you're much likelier to find something :)

Gracenotes
2009-05-25 08:02:55
I agree. This isn't useful.

Casebash
2010-07-09 04:33:24
It's perhaps not useful, but it has the advantage of being true, which puts it above some of the more highly voted answers B-)

Brian Postow
2010-07-09 19:20:17
+22
A:

That's not possible. The definition of Big-O is the *not greater than* inequality:

```
A(n) = O(B(n)) <=> exists constant C > 0 such that for all n: A(n) <= C*B(n)
```

So B(n) is in fact the maximum value; therefore, if it decreases as n increases, the estimation will not change.

sharptooth
2009-05-25 06:21:23

I suspect this answer is the "right one", but unfortunately I lack the intellect to understand it.

freespace
2009-05-25 06:24:55
AFAIK this condition does not have to be true for all n, but for all n > n_0 (i.e., only when the size of the input reaches a specific threshold).

__roland__
2009-05-25 07:37:03
I don't see how the definition (even corrected) contradicts the question of the OP. The definition holds for completely arbitrary functions! 1/n is a completely sensible function for B, and in fact your equation doesn't contradict that (just do the math). So no, despite much consensus, this answer is in fact *wrong*. Sorry.

Konrad Rudolph
2009-05-25 08:00:14
I wouldn't say the answer is wrong. It's just incomplete (and the part that's missing has nothing to do with the math). While O(1/n) is perfectly fine mathematically speaking (I've already seen this for discussing overhead), we're still talking about an algorithm's *time complexity*, where O(1/n) would imply as a limiting case that solving an infinitely large problem costs an infinitely small amount of time. Which sort of answers the question, at least for me :)

__roland__
2009-05-25 08:32:43
Wrong! I don't like downvoting, but you state that this is impossible when there is no clear consensus. In practice you are correct: if you do construct a function with 1/n runtime (easy), it will eventually hit some minimum time, effectively making it an O(1) algorithm when implemented. There is nothing to stop the algorithm from being O(1/n) on paper, though.

jheriko
2009-05-25 09:56:40
@jheriko correct, although as n tends to infinity an O(1/n) algorithm would theoretically take no time at all to execute (tending to O(0)), which is obviously idiotic, thus the minimum possible big-oh class is O(1). Oh, and yes, this answer is not the correct one for this question so -1 =]

Ed Woodcock
2009-05-25 12:36:02
@Roland: If the condition holds for sufficiently large n for some constant, you can find another constant so that the condition holds for all n. The converse holds and thus the two definitions are equivalent.

Jason
2009-05-25 13:21:03
@Jason: Yep, now that you say it... :) @jheriko: A time complexity of O(1/n) does not work on paper IMHO. We're characterizing the growth function f(input size) = #ops for a Turing machine. If it does halt for an input of length n=1 after x steps, then I will choose an input size n >> x, i.e. large enough that, if the algorithm is indeed in O(1/n), no operation should be done. How would a Turing machine even notice this (it's not allowed to read even once from the tape)?

__roland__
2009-05-25 13:54:34
@Konrad, jheriko: Here's a proof (or just a restatement): If an algorithm is O(1/n), it means there is a constant C such that for all (sufficiently large) n, the time taken is T(n) < C*(1/n). Now take any n > C. You would need T(n) < 1, i.e. T(n) = 0, which is not possible.

ShreevatsaR
2009-05-25 20:07:46
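ShreevatsaR's contradiction can be checked numerically. A minimal sketch (the constant C here is an arbitrary illustrative choice, not from the thread):

```python
# Suppose T(n) = O(1/n) with witness constant C: T(n) < C * (1/n) for large n.
# Any real algorithm does at least one step, so T(n) >= 1. Find the inputs
# where the claimed bound C/n drops below one step, making T(n) >= 1 impossible.
C = 1000
violations = [n for n in (10, 100, 999, 1001, 5000) if C / n < 1]
print(violations)  # [1001, 5000] -- the bound fails for every n > C
```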
-1, not because the conclusion is incorrect (I believe it is correct), but because I can't see any sensible argument that leads to the stated conclusion. At the very least it's missing some important steps. (Also, as __roland__ observes, its definition is wrong -- the condition only has to be true for all n above some threshold value n_0.)

j_random_hacker
2009-05-26 11:22:39
The shortest program you can have would have to execute at least a single statement. So, regardless of anything else that happens as the input size grows, you have at least O(1). Whether it performs even worse for smaller inputs than it does for larger inputs is irrelevant to the fact that it will always have at least one statement to execute, so it will be at least O(1).

kenj0418
2010-02-28 17:01:06
@Jason, can you further explain why the two definitions are equivalent? (with and without the "n > n0") Are you saying those two definitions are always equivalent, or only for O(1/n)? If always equivalent, why is "n > n0" part of the definition of O( ) notation at all?

LarsH
2010-09-29 05:30:14
@LarsH: They are always equivalent. It is clear that if you have some constant `C` such that for all `n` we have `A(n) <= C * B(n)` then clearly we have a constant `n_0` such that for all `n > n_0` we have `A(n) <= C * B(n)` (just use `n_0 = 0`). The converse is almost as easy. Suppose that you have a constant `C` and an integer `n_0` such that `A(n) <= C * B(n)` for all `n > n_0`. Let `C_m = max(C, max(A(n) / B(n) | 1 <= n <= n_0))`. Then `C_m` satisfies `C_m >= C` so that `A(n) <= C_m * B(n)` when `n > n_0`. Further, it is chosen so that `A(n) <= C_m * B(n)` when `n <= n_0`.

Jason
2010-09-29 21:12:05
@LarsH: To note, for simplicity I am assuming that our functions `A` and `B` are defined on the positive natural numbers. It is clear how to extend this to a more general scenario.

Jason
2010-09-29 21:23:46
@Jason: OK I'm back, and I believe I got it: given that there are a finite number of values of n below n_0, you can always find a constant that satisfies `A(n) <= C_m * B(n)` for all of them. But then why is Big O notation defined using n_0? Isn't it redundant?

LarsH
2010-09-29 22:01:00
@LarsH: You are correct. In response to your question, mathematicians try to get away with the weakest definition that they can. Further, keep in mind that `O` is about what happens asymptotically and this is clearly reflected in the definition that has the `n > n_0` clause.

Jason
2010-09-30 19:16:51
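Jason's C_m construction can be sanity-checked with a small numeric sketch (the functions A and B below are arbitrary examples, not from the thread):

```python
# If A(n) <= C * B(n) holds only for n > n0, then
# C_m = max(C, max(A(n)/B(n) for 1 <= n <= n0)) works for every n >= 1.
def A(n): return 5 * n + 40   # example cost function
def B(n): return n            # example bound function

C, n0 = 6, 40                 # A(n) <= 6*n holds once n > 40

C_m = max(C, max(A(n) / B(n) for n in range(1, n0 + 1)))
print(C_m)  # 45.0 -- driven by A(1)/B(1)

# The enlarged constant covers all n, with no threshold needed.
assert all(A(n) <= C_m * B(n) for n in range(1, 1000))
```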
A:

OK, I did a bit of thinking about it, and perhaps there exists an algorithm that could follow this general form:

You need to compute the traveling salesman problem for a 1000 node graph, however, you are also given a list of nodes which you cannot visit. As the list of unvisitable nodes grows larger, the problem becomes easier to solve.

Shalmanese
2009-05-25 06:32:12

That's a different kind of n in the O(n), then. With this trick you could say every algorithm is O(q) where q is the number of people living in China, for example.

vava
2009-05-25 06:35:14
Boyer-Moore is of a similar kind (O(n/m)), but that's not really "better than O(1)", because n >= m. I think the same is true for your "unvisitable TSP".

Niki
2009-05-25 06:39:13
Even in this case the TSP is still NP-complete; you're simply removing nodes from the graph, and therefore effectively decreasing n.

Ed Woodcock
2009-05-25 12:41:18
+2
A:

If a solution exists, it can be prepared in advance and accessed in constant time, i.e. immediately. For instance, use a LIFO data structure if you know the sorting query is for reverse order. The data is then already sorted, given that the appropriate model (LIFO) was chosen.

Larsson
2009-05-25 06:32:22

+1
A:

O(1/n) is not less than O(1); it basically means that the more data you have, the faster the algorithm goes. Say you get an array and always fill it up to 10^100 elements if it has fewer than that, and do nothing if there's more. This one is not O(1/n) of course, but something like O(-n) :) Too bad big-O notation does not allow negative values.

vava
2009-05-25 06:32:24
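A toy version of vava's array example, with the 10^100 threshold shrunk to something runnable (the threshold value below is illustrative only):

```python
THRESHOLD = 8  # stand-in for the answer's 10**100

def pad(arr):
    # Work done is the number of missing slots, so it shrinks as len(arr) grows.
    steps = max(0, THRESHOLD - len(arr))
    arr.extend([0] * steps)
    return steps

print(pad([1, 2]))    # 6 steps of padding
print(pad([1] * 10))  # 0 steps: larger input, less work
```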

"O(1/n) is not less then O(1)" -- if a function f is O(1/n), it's also O(1). And big-oh feels a lot like a "lesser than" relation: it's reflexive, it's transitive, and if we have symmetry between f and g the two are equivalent, where big-theta is our equivalence relation. ISTR "real" ordering relations requiring a <= b and b <= a to imply a = b, though, and netcraft^W wikipedia confirms it.So in a sense, it's fair to say that indeed O(1/n) is "less than" O(1).

Jonas Kölker
2009-05-26 03:15:12
A:

What about this:

```
void FindRandomInList(list l)
{
    while (1)
    {
        int rand = Random.next();
        if (l.contains(rand))
            return;
    }
}
```

As the size of the list grows, the expected runtime of the program decreases.

Shalmanese
2009-05-25 06:37:12
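The decreasing expected runtime can be simulated. A rough Monte-Carlo sketch over a small universe (the universe size, trial count, and seed are arbitrary):

```python
import random

# Draw random ints until one is in the set; the expected number of draws is
# (universe size / set size), so it falls as the set grows.
def expected_draws(set_size, universe=1000, trials=2000, seed=0):
    rng = random.Random(seed)
    s = set(range(set_size))
    total = 0
    for _ in range(trials):
        draws = 1
        while rng.randrange(universe) not in s:
            draws += 1
        total += draws
    return total / trials

print(expected_draws(10))   # roughly 100 draws on average
print(expected_draws(500))  # roughly 2 draws: bigger list, faster search
```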

ok, the random function can be thought of as a lazy array, so you're basically searching each element in the "lazy random list" and checking whether it's contained in the input list. I think this is *worse* than linear, not better.

hasen j
2009-05-25 06:54:40
He's got some point if you notice that int has a limited set of values. So when l contains all 2^64 values it's going to be instantaneous all the way. Which makes it worse than O(1) anyway :)

vava
2009-05-25 07:01:14
+9
A:

From my previous learning of big O notation, even if you need 1 step (such as checking a variable, doing an assignment), that is O(1).

Note that O(1) is the same as O(6), because the "constant" doesn't matter. That's why we say O(n) is the same as O(3n).

So if you need even 1 step, that's O(1)... and since your program at least needs 1 step, the minimum an algorithm can go is O(1). Unless we don't do it at all, then it is O(0), I think? If we do anything at all, then it is O(1), and that's the minimum it can go.

(If we choose not to do it, then it may become a Zen or Tao question... in the realm of programming, O(1) is still the minimum).

Or how about this:

**programmer**: boss, I found a way to do it in O(1) time!

**boss**: no need to do it, we are bankrupt this morning.

**programmer**: oh then, it becomes O(0).

動靜能量
2009-05-25 06:47:38

Your joke reminded me of something from the Tao of Programming: http://www.canonical.org/~kragen/tao-of-programming.html#book8 (8.3)

kenj0418
2010-02-28 17:06:32
+2
A:

You can't go below O(1); however O(k), where k is less than N, is possible. We call them sublinear time algorithms. For some problems, a sublinear time algorithm can only give approximate solutions. However, sometimes an approximate solution is just fine, probably because the dataset is too large or it's way too computationally expensive to examine all of it.

Hao Wooi Lim
2009-05-25 06:51:32

Not sure I understand. Log(N) is less than N. Does that mean that Log(N) is a sublinear algorithm? Many Log(N) algorithms do exist; one such example is finding a value in a binary tree. However, these are still different from 1/N, since Log(N) is always increasing, while 1/n is a decreasing function.

Kibbee
2009-05-25 14:41:57
Looking at the definition, a sublinear time algorithm is any algorithm whose time grows slower than size N. So that includes logarithmic time algorithms, which are Log(N).

Hao Wooi Lim
2009-05-26 02:08:35
Uh, sublinear time algorithms can give exact answers, e.g. binary search in an ordered array on a RAM machine.

A. Rex
2009-05-28 00:39:33
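A. Rex's point (binary search is sublinear yet still exact) can be sketched:

```python
# Binary search: O(log n) comparisons on a sorted array, yet an exact answer.
def binary_search(sorted_arr, target):
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not present

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```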
+14
A:

sharptooth is correct, O(1) is the best possible performance. However, it does not imply a fast solution, just a fixed time solution.

An interesting variant, and perhaps what is really being suggested, is which problems get *easier* as the population grows. I can think of one, albeit contrived and tongue-in-cheek:

Do any two people in a set have the same birthday? When n exceeds 365, return true. Although for fewer than 365, this is O(n ln n). Perhaps not a great answer, since the problem doesn't slowly get easier but simply becomes O(1) for n > 365.

Adrian
2009-05-25 07:01:49
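A sketch of the birthday idea (using a hash count for the small case instead of the O(n ln n) sort the answer mentions):

```python
from collections import Counter

def has_shared_birthday(birthdays):
    # Pigeonhole: more than 365 people guarantee a repeat, no scan needed.
    if len(birthdays) > 365:
        return True
    counts = Counter(birthdays)            # O(n) with hashing
    return any(c > 1 for c in counts.values())

print(has_shared_birthday([5, 40, 200]))                   # False
print(has_shared_birthday([d % 365 for d in range(400)]))  # True, by pigeonhole
```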

You are correct. Like computers, I am occasionally subject to rounding errors :-)

Adrian
2009-05-26 00:14:29
+1. There are a number of NP-complete problems that undergo a "phase transition" as n increases, i.e. they quickly become much easier or much harder as you exceed a certain threshold value of n. One example is the Number Partitioning Problem: given a set of n nonnegative integers, partition them into two parts so that the sum of each part is equal. This gets dramatically easier at a certain threshold value of n.

j_random_hacker
2009-05-26 11:34:41
+2
A:

Which problems get easier as the population grows? One answer is something like BitTorrent, where download speed is an inverse function of the number of nodes. Unlike a car, which slows down the more you load it, a file-sharing network like BitTorrent speeds up the more nodes are connected.

LarsOn
2009-05-25 07:22:06

Yes, but the number of bittorrent nodes is more like the number of processors in a parallel computer. The "N" in this case would be the size of the file trying to be downloaded. Just as you could find an element in an unsorted array of length N in constant time if you had N computers, you could download a file of Size N in constant time if you had N computers trying to send you the data.

Kibbee
2009-05-25 14:54:28
+7
A:

What about not running the function at all (NOOP)? or using a fixed value. Does that count?

SpliFF
2009-05-25 08:10:03

Right, that's still O(1). I don't see how someone can understand this, and yet claim in another answer that something less than NO-OP is possible.

ShreevatsaR
2009-05-25 20:21:20
ShreevatsaR: there is absolutely no contradiction. You seem to fail to grasp that big O notation has got *nothing to do* with the time spent in the function – rather, it describes how that time *changes* with changing input (above a certain value). See other comment thread for more.

Konrad Rudolph
2009-05-25 20:42:47
I grasp it perfectly well, thank you. The point — as I made several times in the other thread — is that if the time decreases with input, at rate O(1/n), then it must eventually decrease below the time taken by NOOP. This shows that no algorithm can be O(1/n) asymptotically, although certainly its runtime can decrease up to a limit.

ShreevatsaR
2010-07-09 07:25:38
@ShreevatsaR: It depends on whether you define the null algorithm to take a small amount of time or no time at all.

Casebash
2010-07-09 10:56:43
Yes... as I said elsewhere, any algorithm that is O(1/n) should also take zero time for all inputs, so depending on whether you consider the null algorithm to take 0 time or not, there is an O(1/n) algorithm. So *if* you consider NOOP to be O(1), *then* there are no O(1/n) algorithms.

ShreevatsaR
2010-07-09 16:51:35
+135
A:

This question isn't as stupid as it might seem. At least theoretically, something such as *O*(1/*n*) is completely sensible when we take the mathematical definition of the Big O notation: *f*(*x*) = *O*(*g*(*x*)) if and only if there exist constants M > 0 and x_0 such that |*f*(*x*)| <= M·|*g*(*x*)| for all x > x_0.

Now you can easily substitute 1/*x* for *g*(*x*) … it's obvious that the above definition still holds for some *f*.

For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

```
from time import sleep

def get_faster(list):
    how_long = (1 / len(list)) * 100000
    sleep(how_long)
```

Clearly, this function spends less time as the input size grows … at least until some limit, enforced by the hardware (precision of the numbers, minimum time that `sleep` can wait, time to process arguments etc.): this limit would then be a constant lower bound, so in fact the above function *still* has runtime *O*(1).

But there *are* in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will *not* exhibit runtime behaviour below *O*(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).

Konrad Rudolph
2009-05-25 08:10:15
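The Horspool behaviour Konrad mentions can be sketched; this is one standard formulation of the algorithm, and the test strings below are arbitrary:

```python
# Horspool search: longer patterns allow bigger skips through the haystack,
# so the expected runtime drops as the pattern length m grows.
def horspool(haystack, needle):
    m, n = len(needle), len(haystack)
    if m == 0:
        return 0
    # For each pattern char (except the last), its distance from the end.
    shift = {c: m - 1 - i for i, c in enumerate(needle[:-1])}
    i = 0
    while i <= n - m:
        if haystack[i:i + m] == needle:
            return i
        # Skip based on the haystack char aligned with the pattern's last char;
        # characters absent from the pattern allow a full skip of m.
        i += shift.get(haystack[i + m - 1], m)
    return -1

print(horspool("the quick brown fox", "brown"))  # 10
```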

+1 this is the correct answer. Basically, the OP really asks for a function that's o(1) (small-oh) which does indeed exist (as you demonstrated).

Mehrdad Afshari
2009-05-25 13:10:00
'Enforced by the hardware' also applies to a Turing Machine. In case of O(1/n) there will always be an input size for which the algorithm is not supposed to execute any operation. And therefore I would think that O(1/n) time complexity is indeed impossible to achieve.

__roland__
2009-05-25 14:10:03
@__roland__: Not true. O notation is not really related to steps an algorithm takes but the *growth* of time function as a function of input. The step-wise metaphor of O-notation is an unfortunate consequence of machines we're working on being finite (as I described in my answer). It's not what O is really about.

Mehrdad Afshari
2009-05-25 14:23:17
@Mehrdad: I am not talking about the O-notation as such (this just describes a set of functions, no connection to time whatsoever), but the notion of an *algorithm* is usually defined on a Turing machine. A TM proceeds in discrete steps. When talking about the time complexity of an algorithm, you're really discussing the growth function of the number of *steps* an equivalent TM has to take for a solution. That's why I doubt that there is an O(1/n) algorithm (=TM).

__roland__
2009-05-25 14:42:43
@__roland__: Yes, but it's not about the *absolute* number of steps it takes to run an algorithm. It's about the rate of growth of steps as N becomes larger. If an algorithm takes half the steps for input=N+1 relative to input=N, it'll be O(1/2^N).

Mehrdad Afshari
2009-05-25 14:53:54
@Mehrdad: You are right, absolute numbers are abstracted away by the constants in the definition. But with O(1/n) the absolute number of steps doesn't matter at all: since 1/n is approaching zero I can always choose a large enough n to let any (discrete!) number of steps be zero, no matter what constants are used.

__roland__
2009-05-25 15:10:46
@__roland__: And that's another reason this is not possible in finite machines, but in a Turing machine, it can take infinite steps to process Algo(N=0) :) For any N you choose, I can choose initial time large enough so that it doesn't approach zero!

Mehrdad Afshari
2009-05-25 15:16:20
@Mehrdad: The game is just the other way round: you have to define the algorithm first, then this algorithm's complexity can be expressed with O(*) [which includes choosing N]. BTW a TM is finite in that it has a finite number of states, and (theoretically) infinite runtime like you describe is easily possible in finite machines (if(input.size()==1)while(true){}). But this of course won't help.

__roland__
2009-05-25 15:24:12
@__roland__: That's just how we're used to looking at it, as we're using finite machines. A TM is infinite, as it has an infinite tape (and thus can be in infinitely many different situations; this is what I call a state, not the Q set, which is finite). Note that a program that doesn't halt is *not* equal to a program that takes infinitely long to halt. You just have to think outside the box...

Mehrdad Afshari
2009-05-25 15:30:56
Mehrdad, you don't understand. The O notation is something about the *limit* (technically lim sup) as n -> ∞. The running time of an algorithm/program is the number of steps on some machine, and is therefore discrete -- there is a non-zero lower bound on the time that an algorithm can take ("one step"). It *is* possible that up to *some finite N* a program takes a number of steps decreasing with n, but the only way an algorithm can be O(1/n), or indeed o(1), is if it takes time 0 for all sufficiently large n -- which is not possible.

ShreevatsaR
2009-05-25 20:04:01
ShreevatsaR: You are misinterpreting the limit: it's not required to take time 0 for all sufficiently large N but it approaches 0 as N approaches infinity. If you assume T(0) = infinity, T(N+1) = T(N)/2, this would satisfy the condition of O(1/2^N). It doesn't need to be *equal* to zero for a given large value of N. It just approaches zero in *infinity*, which is not an issue. O notation requires the N0 in `n > N0` to be a real number.

Mehrdad Afshari
2009-05-25 20:37:57
ShreevatsaR: Your discretization is all well and nice but since we're talking theory here anyway I'll just go ahead and define that the `sleep` function in my example has no discrete step size, so there! Please don't make this discussion even more confusing by mixing theoretical concepts and reality. All are in agreement: in reality, there is no O(1/n) runtime. Satisfied? (But then, in reality, all algorithms are O(1) due to Von Neumann architecture limitations. This gets us absolutely nowhere.)

Konrad Rudolph
2009-05-25 20:47:04
We are not disagreeing that O(1/n) *functions* (in the mathematical sense) exist. Obviously they do. But computation is inherently discrete. Something that has a lower bound, such as the running time of a program -- on either the von Neumann architecture or a purely abstract Turing machine -- *cannot* be O(1/n). Equivalently, something that is O(1/n) cannot have a lower bound. (Your "sleep" function has to be invoked, or the variable "list" has to be examined -- or the input tape has to be examined on a Turing machine. So the time taken would change with n as some ε + 1/n, which is not O(1/n))

ShreevatsaR
2009-05-26 00:14:55
@ShreevatsaR: Thanks, that was exactly my point. There simply is no TM (=algorithm) for 'sleeping' that doesn't require a discrete step size - not even in theory ;-)

__roland__
2009-05-26 05:31:52
@Mehrdad: "f(n) = O(g(n))" means that, whenever n is above some "threshold" value N0, f(n) must be <= M*g(n). The main thing to understand is that although you can choose the constants N0 and M freely, you must choose them **beforehand** -- they are not allowed to be functions of n. And, for *any* choice of N0 and M that you make, any algorithm that takes 1 or more steps will eventually exceed M*(1/n) (i.e. there is some input size k such that k > N0 and f(k) > M*(1/k)).

j_random_hacker
2009-05-26 11:16:46
@j_random_hacker: I know. What I said was a response to the discretization idea, not the limit. What I was basically saying is that if "T(0) = infinity" rather than a real number, it's possible for T(n) not to reach 0 except in infinity. It can be arbitrarily close, but since you already have an infinite number of steps in the range 0..1, it's no longer supposed to be 0. It's mapping an infinite number of discrete points to a finite range. Anyway, I think both of them make sense. I prefer not to discuss any further as I'm not a math expert.

Mehrdad Afshari
2009-05-26 11:46:34
@Mehrdad: OK. I have to confess that the difference between programs that don't halt and those that take infinitely long to halt is too much for my maths as well... :)

j_random_hacker
2009-05-26 13:05:33
If T(0)=∞, it doesn't halt. There is no such thing as "T(0)=∞, but it still halts". Further, even if you work in R∪{∞} and define T(0)=∞ and T(n+1)=T(n)/2, then T(n)=∞ for all n. Let me repeat: if a discrete-valued function is O(1/n), then for all sufficiently large n it is 0. [Proof: T(n)=O(1/n) means there exists a constant c such that for n>N0, T(n)<c(1/n), which means that for any n>max(N0,c), T(n)<1, which means T(n)=0.] No machine, real or abstract, can take 0 time: it *has* to look at the input. Well, besides the machine that never does anything, and for which T(n)=0 for all n.

ShreevatsaR
2009-05-27 03:50:31
ShreevatsaR: I'm not an expert here. What you are saying seems right, but I'm still not sure that computation is inherently carried out in discrete steps. While it might be for the TM model, there are other models, such as quantum computers, out there. Anyway, the whole question was a great discussion. Unfortunately, I don't think I have enough knowledge to carry it on.

Mehrdad Afshari
2009-05-27 20:22:37
Even in quantum computers, (measurable) changes only happen discretely. In any case, discreteness is actually not crucial: the initial step (of examining the input) takes some time (say ε), so even if we ignore the "sleep" time, we have T(n) ≥ ε for all n, while on the other hand if T(n) = O(1/n), we would have had T(n) -> 0 as n -> ∞.

ShreevatsaR
2009-05-28 00:25:15
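The ε argument above amounts to a two-line contradiction, restated in notation:

```latex
T(n) \;\ge\; \varepsilon \;>\; 0 \quad \text{for all } n,
\qquad\text{whereas}\qquad
T(n) = O(1/n) \;\Longrightarrow\; \lim_{n \to \infty} T(n) = 0,
```

so the two requirements are incompatible for any machine that must at least examine its input.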
You have to like any answer that begins "This question isn't as stupid as it might seem."

Telemachus
2009-06-27 12:53:17
Really interesting discussion. And to prove ShreevatsaR's point further, even a theoretical oracle machine needs to do **one** operation, so not even an oracle can do stuff in O(1/n) → O(0) because it would make no sense.

David Titarenco
2010-07-09 01:43:59
Really, the fact that this answer got nearly 100 votes while it was still incorrect shows the worst thing about Stack Overflow — answers that *look* correct get voted higher than actually correct answers. (But thanks to Konrad Rudolph for finally editing his answer and making it correct.)

ShreevatsaR
2010-07-09 03:14:06
@ShreevatsaR: This post was nowhere near to 100 votes until *long after* my edit on June 27 '09. But I’d like to point out that this edit served as clarification only. That was what I’d meant all along. But I admit that this point was unclear and easy to misunderstand, and I made the error worse in the comments by confusing the mathematical concept with its application in computer science. (And I very much enjoyed the discussion. Your explanations were well received.)

Konrad Rudolph
2010-07-09 08:48:14
Sorry, I just looked at the edits, and you're right: I must admit there was nothing strictly wrong in the first answer, although it was missing an explicit "no" :-) (and the limit is not a practical limitation of hardware but holds even in theory). I must have got confused with the *discussion* in the comments here and with other answers, sorry. But (especially after the clarification :p) this is really a great answer; both the sleep(1/n) example and the actual Boyer-Horspool algorithm are very good examples of how runtime can decrease within certain ranges. Apologies for the previous comment!

ShreevatsaR
2010-07-09 09:10:36
To put it more clearly: the direct answer to the given question: "Are there any O(1/n) algorithms? Or anything else which is less than O(1)?" is "No", for trivial reasons. (The time would have to go arbitrarily small, but even the tiniest operation like reading the input takes some fixed time.) But *beyond* this trivial answer, there is still an interesting question, which is whether run time can ever decrease, even for a while, and your answer gives excellent examples. I think because this answer had only the second part initially, I got the impression it wasn't correct (until I just looked).

ShreevatsaR
2010-07-09 09:21:12
@Brian: it isn’t. `list` in the sample code obviously means some generic list, not necessarily a linked list.

Konrad Rudolph
2010-07-09 20:38:44
A:

If the answer is the same regardless of the input data then you have an O(0) algorithm.

Or in other words: the answer is known before the input data is submitted, so the function could be optimised out entirely, hence O(0).

pro
2009-05-25 08:15:13

Really? You would still need to return a value, so wouldn't it still be O(1)?

Joachim Sauer
2009-05-25 08:51:15
no, O(0) would imply it takes zero time for all inputs. O(1) is constant time.

Pete Kirkham
2009-05-25 08:51:26
A:

I don't understand the mathematics but the concept appears to be looking for a function that takes less time as you add more inputs? In that case what about:

```
def f(*args):
    if len(args) < 2:
        args = args + (10,)  # assigning the default is the extra work
    return args
```

This function is quicker when the optional second argument is supplied, because otherwise that value has to be assigned. I realise this isn't an equation, but then the Wikipedia page says big-O is often applied to computing systems as well.

SpliFF
2009-05-25 08:24:35

A:

Example: you are in a labyrinth. What's the way out? The answer is O(1) if you put the turns taken on a LIFO as you go; if they are put on a FIFO instead, the answer needs sorting (a reversal) and is slower. I suppose any solution can end up sorted before the query, given that the data model is flexible between at least LIFO, FIFO, or priority.

LarsOn
2009-05-25 09:55:30

+6
A:

O(1) simply means "constant time".

When you add an early exit to a loop[1] you're (in big-O notation) turning an O(1) algorithm into O(n), but making it faster.

The trick is that *in general* the constant time algorithm is the best, and linear is better than exponential, but for small values of n, the exponential algorithm might actually be faster.

1: Assuming a static list length for this example

LapTop006
2009-05-25 12:28:45
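A sketch of the early-exit point (the function and data are illustrative; with a fixed list length the full scan is a constant number of steps, and the early exit only lowers the typical count):

```python
def contains(items, target):
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return True, steps   # early exit: fewer steps on average
    return False, steps          # worst case still scans everything

print(contains([7, 3, 9, 1], 3))  # (True, 2)
print(contains([7, 3, 9, 1], 8))  # (False, 4)
```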

+5
A:

No, this is not possible:

As n tends to infinity in 1/n we eventually achieve 1/(inf), which is effectively 0.

Thus, the big-oh class of the problem would be O(0) with a massive n, but closer to constant time with a low n. This is not sensible, as the only thing that can be done in faster than constant time is:

`void nothing() {};`

And even this is arguable!

As soon as you execute a command, you're in at least O(1), so no, we cannot have a big-oh class of O(1/n)!

Ed Woodcock
2009-05-25 12:45:40

+49
A:

I think the source of the debate is possibly the misconception of Big-Oh and Big-theta notations.

In some answers and comments it was mentioned that a `O(1/n)`

can be considered `O(1)`

as the function is already bounded to some constant. This is true *and indeed, it's the nature of Big-Oh notation. You can say QuickSort is O(n ^{100}) and you are technically correct as it's a valid upper bound (which is not very tight in this example)*.

As a result, technically, this question is obvious: If anything is less than O(1) *(or O(whatever), for that matter)*, it would **also** be O(1) *(O( > whatever))*, by definition.

In regular speech, we mostly use Big-Oh to mean tight bounds (instead of Big-theta, which is the correct notation for it). Basically, the question should not include the "or anything less than O(1)" part. Konrad's answer is completely right.

These notations are **theoretical concepts**. They don't take one important thing into account that currently limits us in practice: In practice, our machines are **finite** and this is where it seems like a paradox. In a finite machine, our "`n`

" **always** has an upper bound. Therefore "`1/n`

" will have a lower bound which is a constant. So the algorithm practically looks `O(1)`

in **all finite state machines** but because our **n** can never become big enough.

To those who believe that there are no `o(1)` (little-o) algorithms in practice (as mentioned in some answers), I can argue that in practice all algorithms that eventually halt are also `O(1)`. By similar reasoning to the above, because the input itself is always bounded, any function of the input will also be bounded (it can be very large, but it still has a constant bound!), so every algorithm is `O(1)` on finite machines. (Put differently, the pigeonhole principle implies that a deterministic finite state machine with `S` different states must revisit some state after at most `S` transitions. So any algorithm that eventually halts, and therefore never revisits a state, can make at most `S-1` state transitions, so every algorithm is O(1).)

As pointed out in the comments, computation, as defined by the Turing machine model, is inherently discrete. *This implies that O(1/n) algorithms cannot exist in a discrete computational model.* My point is about what asymptotic growth means in practice, on finite machines.
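The finite-machine point above can be sanity-checked with a tiny sketch (the bound `N_MAX` is a hypothetical hardware limit, not anything from the thread):

```python
# On a finite machine the input size n is bounded by some N_MAX,
# so 1/n is bounded below by the positive constant 1/N_MAX.
N_MAX = 2 ** 16          # hypothetical upper limit on input size
lower_bound = 1 / N_MAX

# Every value of 1/n on this machine stays at or above that constant,
# which is why an "O(1/n)" cost profile flattens into O(1) in practice.
assert all(1 / n >= lower_bound for n in range(1, N_MAX + 1))
print(lower_bound)
```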

Mehrdad Afshari
2009-05-25 12:47:34

True, all algorithms in practice (on finite machines / with bounded tape) are O(1), but this is irrelevant. There are no o(1) algorithms either in practice or *in theory*.

ShreevatsaR
2009-05-25 20:10:09
"I can argue that in practice all algorithms that eventually halt are also O(1)!"And I can argue that breaking a 4096-bit RSA key pair in practice takes exponential time ;-)

Jonas Kölker
2009-05-26 03:10:09
Many good points, but I think your first part is not actually relevant: yes, any function that is O(1/n) is also O(1) (and O(n^100)), but that is not the question -- the question is, do any O(1/n) algorithms exist? (I.e. the question is about the converse situation, and your reasoning says nothing about that. E.g. it certainly does not follow that any O(1) algorithm is also O(1/n).)

j_random_hacker
2009-05-26 11:29:29
@j_random_hacker: That's the question title, but the body specifies anything less than O(1), which I think caused the misinterpretation, and I felt I should clarify that fact too.

Mehrdad Afshari
2009-05-26 11:47:54
@Mehrdad: I see what you mean, but I think it still makes sense to talk about a function being "less than O(g(x))" if you mentally substitute "Theta(g(x))" for "O(g(x))". (People often say O(*) when they mean Theta(*).) But you're right to make the point that people shouldn't be so sloppy.

j_random_hacker
2009-05-26 13:00:19
Actually, the "anything else which is less than O(1)" bit is valid, but the technically correct wording is "anything which is not Ω(1)". Whereas big O gives an upper bound on a function's order of growth, big omega gives a lower bound. Big theta gives the order of growth precisely.

Stewart
2009-08-17 09:48:39
-1: No algorithm can be O(1/n) unless it is willing to take 0 time to answer a question. So, no, it's not about a difference between theory and practice. It is true that all algorithms are "in practice" O(1) in the way you have defined, but that way is almost meaningless. The whole point of O notation is to measure growth - your results lead to no meaningful conclusions

Casebash
2010-07-09 04:56:05
This answer is more or less tangential to the question, and yet it gets voted up highly simply because it points out O versus theta and *looks* correct. It says that theory and practice are different, but this is irrelevant because there are no O(1/n) algorithms even in theory. The fact that answers like this are voted up for "looking correct" is the worst thing about Stack Overflow.

ShreevatsaR
2010-07-09 07:12:12
@ShreevastsaR: I've accepted your correct arguments long ago. Konrad's answer sums it up pretty well. Having the choice to leave this answer or delete it altogether, I chose the former for these reasons: 1. While they may be tangential, some points of my answer are indeed true and I think they add value to the discussion. 2. My original idea about the answer, even if I couldn't say it clearly, is still valid IMO: ...

Mehrdad Afshari
2010-07-09 11:24:48
... you can have a function whose running time decreases in proportion to the input for long enough to be a useful, real-world usage of O(1/n), just like O(n^2) is a useful description for a (constrained-input-size) BubbleSort implementation in practice (as demonstrated by Konrad's examples). 3. Deleting the answer would remove the discussion, history, and valuable comments.

Mehrdad Afshari
2010-07-09 11:25:52
Ah now you're not talking theory any more. :-) By definition (in theory), O(1/n) is an asymptotic bound as n→∞, and so there are no O(1/n) algorithms (except, depending on your definitions, the null algorithm with 0 runtime). As you yourself point out, we must not confuse theory and practice, "O-notation is about algorithms, not practical implementations of them", and "in practice" all implementations on real hardware are O(1) so it doesn't make sense to talk of them. Anyway, glad we agree finally.

ShreevatsaR
2010-07-09 16:56:03
Your answer still contains the phrase "not because it's not O(1/n)", and it would be good if you removed it. Although your answer has useful points, it seems to make it appear that the objection that there are no O(1/n) algorithms has something to do with practical limitations, when the objection is entirely theoretical (on Turing machines). So if you could include a line at the top on the correct fact, it would be an improvement. But, your wish, and the wish of the rest of the community if they want to be misled. It's a trivial point anyway, and perhaps being right is not so important.

ShreevatsaR
2010-07-09 17:00:25
@ShreevatsaR: In fact, I agreed with you as soon as I stopped responding to comment thread. It was an enlightening discussion for me. Thanks for that.

Mehrdad Afshari
2010-07-09 18:21:11
A:

Big-O notation represents the **worst case scenario** for an algorithm, which is not the same thing as its typical run time. It is simple to prove that an O(1/n) algorithm is also an O(1) algorithm. By definition,

T(n) = O(1/n) --> T(n) <= 1/n, for all n >= C > 0

T(n) = O(1/n) --> T(n) <= 1/C, since 1/n <= 1/C for all n >= C

T(n) = O(1/n) --> T(n) = O(1), since Big-O notation ignores constants (i.e. the value of C doesn't matter)
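The derivation above can be checked numerically; this sketch just verifies the inequality for a sample constant and threshold (both values are arbitrary choices, not from the answer):

```python
# If T(n) <= c/n for all n >= C, then T(n) is also bounded by the
# constant c/C on that range -- which is exactly an O(1) bound.
c, C = 5.0, 10

def T(n):
    return c / n          # a function that is O(1/n) by construction

constant_bound = c / C
assert all(T(n) <= constant_bound for n in range(C, 100_000))
print(constant_bound)
```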

Lawrence Barsanti
2009-05-25 13:18:41

No: Big O notation is also used to talk about average-case and expected time (and even best-case) scenarios. The rest follows.

Konrad Rudolph
2009-05-25 13:23:19
The 'O' notation certainly defines an *upper bound* (in terms of algorithmic complexity, this would be the worst case). Omega and Theta are used to denote best and average case, respectively.

__roland__
2009-05-25 13:59:33
Roland: That's a misconception; upper bound is not the same thing as worst-case, the two are independent concepts. Consider the expected (and average) runtime of the `hashtable-contains` algorithm which can be denoted as O(1) -- and the worst case can be given very precisely as Theta(n)! Omega and Theta may simply be used to denote other bounds but *to say it again*: they have got nothing to do with average or best case.

Konrad Rudolph
2009-05-25 14:17:44
Konrad: True. Still, Omega, Theta and O are usually used to *express* bounds, and if *all* possible inputs are considered, O represents the upper bound, etc.

__roland__
2009-05-25 15:03:13
The fact that O(1/n) is a subset of O(1) is trivial and follows directly from the definition. In fact, if a function g is O(h), then any function f which is O(g) is also O(h).

Tobias
2009-06-11 17:43:59
+5
A:

I often use O(1/n) to describe probabilities that get smaller as the inputs get larger -- for example, the probability that a fair coin comes up tails on log2(n) flips is O(1/n).
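This probability claim is easy to verify by simulation; a quick sketch (the trial count and seed are arbitrary choices):

```python
import math
import random

def all_tails(flips, rng):
    """True if every one of `flips` fair-coin tosses comes up tails."""
    return all(rng.random() < 0.5 for _ in range(flips))

rng = random.Random(0)
trials = 200_000
for n in (4, 16, 64):
    k = int(math.log2(n))                    # log2(n) flips
    hits = sum(all_tails(k, rng) for _ in range(trials))
    # P(k tails in a row) = (1/2)^k = 1/n, which shrinks as n grows
    print(n, hits / trials)
```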

Dave
2009-05-25 17:52:18

That's not what big O is though. You can't just redefine it in order to answer the question.

Zifre
2009-05-25 20:11:26
I am a theoretical computer scientist by trade. It's about the asymptotic order of a function.

Dave
2009-05-25 23:03:10
Big O is a property of an arbitrary real function. Time complexity is just one of its possible applications. Space complexity (the amount of working memory an algorithm uses) is another. That the question is about O(1/n) _algorithms_ implies that it's one of these (unless there's another that applies to algorithms that I don't know about). Other applications include orders of population growth, e.g. in Conway's Life. See also http://en.wikipedia.org/wiki/Big_O_notation

Stewart
2009-08-17 15:51:29
@Dave: The question wasn't whether there exist O(1/n) functions, which obviously do exist. Rather, it was whether there exist O(1/n) algorithms, which (with the possible exception of the null function) can't exist

Casebash
2010-07-09 10:58:30
A:

Nothing is smaller than O(1). Big-O notation implies the largest order of complexity for an algorithm.

If an algorithm has a runtime of n^3 + n^2 + n + 5, then it is O(n^3). The lower powers don't matter here at all, because as n -> Inf, n^2 will be irrelevant compared to n^3.

Likewise, as n -> Inf, O(1/n) will be irrelevant compared to O(1), hence 3 + O(1/n) will be the same as O(1), thus making O(1) the smallest possible computational complexity.
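The dominance claim above can be seen with a few numbers (a quick sketch):

```python
# For T(n) = n^3 + n^2 + n + 5, the ratio T(n)/n^3 tends to 1:
# the lower-order terms vanish relative to the leading n^3 term.
for n in (10, 100, 1000):
    total = n**3 + n**2 + n + 5
    print(n, total / n**3)
```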

A:

I see an algorithm that is O(1/n), admittedly only in an average-case sense:

You have a large series of inputs which are changing due to something external to the routine (maybe they reflect hardware, or it could even be some other core in the processor doing it), and you must select a random but valid one.

Now, if it weren't changing, you would simply make a list of items, pick one randomly, and get O(1) time. However, the dynamic nature of the data precludes making a list; you simply have to probe randomly and test the validity of the probe. (And note that inherently there is no guarantee the answer is still valid when it's returned. This could still have uses - say, the AI for a unit in a game: it could shoot at a target that dropped out of sight while it was pulling the trigger.)

This has a worst-case performance of infinity but an average case performance that goes down as the data space fills up.
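The probe-and-test idea can be simulated; in this sketch (all names and sizes are made up) each probe independently hits a valid slot with probability `valid/pool_size`, so the expected number of probes is `pool_size/valid` and shrinks as the data space fills up:

```python
import random

def probes_until_valid(valid, pool_size, rng):
    """Count random probes until one lands on a valid slot."""
    probes = 1
    while rng.random() >= valid / pool_size:   # this probe was invalid
        probes += 1
    return probes

rng = random.Random(42)
trials = 20_000
pool_size = 1_000
for valid in (10, 100, 500):
    avg = sum(probes_until_valid(valid, pool_size, rng)
              for _ in range(trials)) / trials
    # Expected probes = pool_size / valid: average work drops as more
    # of the pool becomes valid, though the worst case is unbounded.
    print(valid, round(avg, 1))
```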

Loren Pechtel
2009-05-28 00:36:35

+16
A:

Yes.

There is precisely one algorithm with runtime O(1/n), the "empty" algorithm.

For an algorithm to be O(1/n) means that it executes asymptotically in fewer steps than an algorithm consisting of a single instruction. If it executes in fewer steps than one step for all n > n0, it must consist of precisely no instructions at all for those n. Since checking 'if n > n0' costs at least 1 instruction, it must consist of no instructions for all n.

Summing up:
The only algorithm which is O(1/n) is the empty algorithm, consisting of *no* instruction.

Tobias
2009-06-11 17:40:02

So if someone asked what the time complexity of an empty algorithm is, you'd answer with O(1/n) ??? Somehow I doubt that.

phkahler
2010-02-16 20:42:58
This is the only correct answer in this thread, and (despite my upvote) it is at zero votes. Such is StackOverflow, where "correct-looking" answers are voted higher than actually correct ones.

ShreevatsaR
2010-02-24 17:15:36
No, it's rated 0 because it is incorrect. Expressing a big-Oh value in relation to N when it is independent of N is incorrect. Second, running any program, even one that just exits, takes at least a constant amount of time, O(1). Even if that weren't the case, it'd be O(0), not O(1/n).

kenj0418
2010-02-28 17:16:28
Any function that is O(0) is also O(1/n), and also O(n), also O(n^2), also O(2^n). Sigh, does no one understand simple definitions? O() is an upper bound.

ShreevatsaR
2010-04-15 03:52:53
@kenj0418 You managed to be wrong in every single sentence. "Expressing a big-Oh value in relation to N when it is independent of N is incorrect." A constant function is a perfectly good function. "Second, running any program, even one that just exists, takes at least a constant amount of time, O(1)." The definition of complexity doesn't say anything about actually running any programs. "it'd be O(0), not O(1/n)". See @ShreevatsaR's comment.

Alexey Romanov
2010-08-24 20:12:49
That as well, but the point is that it isn't Ω(1). And why has my answer been downvoted? If you think I'm wrong, how about explaining?

Stewart
2010-02-16 13:55:42
I asked elsewhere if, basically, this very answer is correct or not, and it seems to be disputed: http://stackoverflow.com/questions/3209139/is-the-time-complexity-of-the-empty-algorithm-o0

apollodude217
2010-07-11 14:48:06
A:

Here's a simple O(1/n) algorithm. And it even does something interesting!

```
function foo(list input) {
    int m;
    double output;

    m = (1 / input.size) * max_value;   // max_value is some fixed constant
    output = 0;
    for (int i = 0; i < m; i++)
        output += random(0, 1);
    return output;
}
```

O(1/n) is possible because it describes how the cost of a function changes with increasing input size. If we use the function 1/n to describe the number of instructions a function executes, there is no requirement that the function take zero instructions for any input size. Rather, the requirement is that for every input size n above some threshold, the number of instructions required is bounded above by a positive constant multiplied by 1/n. As there is no n for which 1/n is 0, and the constant is positive, there is no reason why the function would be constrained to take 0 or fewer instructions.
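A runnable Python rendering of the idea above (the constant `MAX_VALUE` is a stand-in for the unspecified `max_value` in the answer):

```python
import random

MAX_VALUE = 100_000   # hypothetical stand-in for the answer's constant

def foo(input_list):
    # The iteration count m is proportional to 1/len(input_list),
    # so the loop does less work as the input grows.
    m = int((1 / len(input_list)) * MAX_VALUE)
    return sum(random.random() for _ in range(m))

for n in (10, 100, 1000):
    print(n, int((1 / n) * MAX_VALUE))   # iterations performed
```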

ejspencer
2009-09-15 22:43:43

Since O(1/n) falls below the horizontal line y = 1, and as n approaches infinity your code will still execute a given number of steps, this algorithm is an O(1) algorithm. Big-O notation is a function of all the different parts of the algorithm, and it picks the biggest one. Since the method will always run some of the instructions, as n approaches infinity you're left with those same instructions executing every time, and thus the method will run in constant time. Granted, it won't be much time, but that's not relevant to Big-O notation.

Lasse V. Karlsen
2010-01-23 23:55:22
+1
A:

In numerical analysis, approximation algorithms should have sub-constant asymptotic complexity in the approximation tolerance.

```
class Function
{
    public double[] ApproximateSolution(double tolerance)
    {
        // if this isn't sub-constant on the parameter, it's rather useless
    }
}
```
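One reading of "complexity in the approximation tolerance" is that the work grows as the tolerance shrinks. A hedged sketch using bisection (my example, not from the answer), where the iteration count grows like log2(1/tolerance):

```python
def bisect_sqrt2(tolerance):
    """Bisection for sqrt(2) on [1, 2]; returns the iteration count."""
    lo, hi, steps = 1.0, 2.0, 0
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid          # root lies in the upper half
        else:
            hi = mid          # root lies in the lower half
        steps += 1
    return steps

for tol in (1e-2, 1e-4, 1e-8):
    # The interval halves each step, so steps ~ log2(1/tol)
    print(tol, bisect_sqrt2(tol))
```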

280Z28
2010-01-24 00:12:54

@280Z28: do you really mean sub-constant, or sublinear? Why should approximation algorithms be sub-constant? And what does that even mean??

LarsH
2010-09-29 05:45:10
A:

There are sub-linear algorithms. In fact, the Boyer-Moore search algorithm is a very good example of one.

Esteban Araya
2010-01-24 00:23:30

A:

As has been pointed out, apart from the possible exception of the null function, there can be no `O(1/n)` algorithms, as the time taken would have to approach 0.

Of course, there are some algorithms, like the one defined by Konrad, which seem like they should be less than `O(1)` in at least some sense.

```
from time import sleep

def get_faster(list):
    how_long = 1 / len(list)   # sleeps for less time the longer the list
    sleep(how_long)
```

If you want to investigate these algorithms, you should either define your own asymptotic measurement or your own notion of time. For example, in the above algorithm, I could allow the use of a number of "free" operations a set number of times. If I define t' by counting only the time spent in the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behavior here is trivial. In fact, I am sure that someone out there can come up with senses that give non-trivial results.

Casebash
2010-07-09 11:19:32

+2
A:

Many people have given the correct answer (No). Here's another way to prove it: in order to have a function, you have to call the function, and you have to return an answer. This takes a certain constant amount of time. EVEN IF the rest of the processing took less time for larger inputs, printing out the answer (which we can assume to be a single bit) takes at least constant time.

Brian Postow
2010-07-09 19:29:46

A:

I had such a doubt way back in 2007; nice to see this thread. I came to it from my reddit thread -> http://www.reddit.com/r/programming/comments/d4i8t/trying_to_find_an_algorithm_with_its_complexity/

hemanth.hm
2010-08-24 14:09:27