views:

1595

answers:

15

What is the clearest explanation of what computer scientists mean by "the naive implementation"? I need a good clear example which will illustrate -- ideally, even to non-technical people -- that the naive implementation may technically be a functioning solution to the problem, but practically be utterly unusable.

A: 

Bubble sort over 100,000 entries.

Joshua
I disagree. Bubble sort is best when the input is already semi-sorted. When the list is already sorted, bubble sort finishes after a single pass, whereas that is exactly quicksort's worst case. The real naïveté is to believe that one sorting algorithm fits all situations.
Joe Pineda
Not the bubble sort I know. It does something like:

    for (int i = 0; i < n - 1; i++)
        for (int j = i; j < n; j++)
            if (x[i] > x[j]) { t = x[i]; x[i] = x[j]; x[j] = t; }
Joshua
Yours is the most naïve implementation of bubble sort! Look at http://en.wikipedia.org/wiki/Bubble_sort - they stop if, after a pass, no two elements were swapped (ergo, the list is already ordered). And the variant which alternates going up and then down ("sand and bubbles") is O(n^2 / 2).
Joe Pineda
A: 

An O(n!) algorithm.

foreach(object o in set1)
{
    foreach(object p in set1)
    {
        // codez
    }
}

This will perform fine with small sets and then exponentially worse with larger ones.

Another might be a naive Singleton that doesn't account for threading.

private static SomeObject obj;

public static SomeObject Instance
{
    get
    {
        if (obj == null)
        {
            obj = new SomeObject();
        }

        return obj;
    }
}

If two threads access that at the same time, it's possible for them to get two different instances, leading to seriously weird bugs.

Quibblesome
The singleton example certainly isn't suitable for a non-technical audience, and it's not clear what your first example is about (or why it's n! rather than n^2).
Jon Skeet
I also can't see the n! there. Both loops run set1.Count times => O(n^2)
lacop
:(. I am teh fail... to be honest I thought that n! was n^2 - damn my lack of education! But now I'm intrigued to create an n! alg just to see what one looks like. I guess I'll try a brute-force travelling salesman. Thanks for the correction! :)
Quibblesome
or simply the naive recursive Fibonacci algorithm - not quite n!, but still exponential
Chii
+2  A: 

Doing it the most straightforward, least tricky way available. One example is selection sort.

In this case naive does not mean bad or unusable. It just means not particularly good.


Taking Jon Skeet's advice to heart you can describe selection sort as:

  1. Find the highest value in the list and put it first
  2. Find the next highest value and put it next
  3. Repeat step 2 until you run out of list

It is easy to do and easy to understand, but not necessarily the best.
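
The three steps above can be sketched as (a minimal sketch in Python, sorting highest-first to match the description):

```python
def selection_sort(items):
    """Sort a list, highest value first, by repeated selection."""
    result = list(items)
    for i in range(len(result)):
        # Find the highest remaining value...
        best = i
        for j in range(i + 1, len(result)):
            if result[j] > result[best]:
                best = j
        # ...and put it next in the list.
        result[i], result[best] = result[best], result[i]
    return result

print(selection_sort([3, 1, 4, 1, 5]))  # [5, 4, 3, 1, 1]
```

Two nested passes over the list, hence O(n^2) comparisons.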

dmckee
+1  A: 

Determining if a number is prime or not (primality test) is an excellent example.

The naive method just checks whether n mod x is zero for at least one x in 2..sqrt(n). This method gets really slow for very large numbers, making it infeasible to use in cryptography.
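
The trial-division method just described can be sketched as:

```python
import math

def is_prime_naive(n):
    """Trial division: check every x in 2..sqrt(n) for a divisor."""
    if n < 2:
        return False
    for x in range(2, math.isqrt(n) + 1):
        if n % x == 0:
            return False
    return True

print([n for n in range(2, 20) if is_prime_naive(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

Correct for any n, but the loop length grows with sqrt(n), which is hopeless for numbers of cryptographic size.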

On the other hand, there are a couple of probabilistic or fast deterministic tests. These are too complicated to explain here, but you might want to check the relevant Wikipedia article on the subject for more information: http://en.wikipedia.org/wiki/Primality_test

DrJokepu
+1  A: 

Let's say that someone figures out how to extract a single field from a database, and then proceeds to write a web page in PHP (or any other language) that makes a separate query to the database for each field on the page. It works, but it will be incredibly slow, inefficient, and difficult to maintain.
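
That anti-pattern can be sketched in a few lines (a minimal sketch using Python's sqlite3; the `users` table and its columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

# Naive: one separate query per field on the page.
fields = ["name", "email"]
naive = [conn.execute(f"SELECT {f} FROM users WHERE id = 1").fetchone()[0]
         for f in fields]

# Sensible: fetch the whole row in a single query.
row = conn.execute("SELECT name, email FROM users WHERE id = 1").fetchone()

print(naive, tuple(row))  # ['Ada', 'ada@example.com'] ('Ada', 'ada@example.com')
```

With an in-memory database the difference is invisible; against a real server, each extra query is a full network round-trip.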

postfuturist
+55  A: 

I'd try to keep it away from computers altogether. Ask your audience how they find an entry in a dictionary. (A normal dictionary of word definitions.)

The naive implementation is to start at the very beginning, and look at the first word. Oh, that's not the word we're looking for - look at the next one, etc. It's worth pointing out to the audience that they probably didn't even think of that way of doing things - we're smart enough to discount it immediately! It is, however, about the simplest way you could think of. (It might be interesting to ask them whether they can think of anything simpler, and check that they do really understand why it's simpler than the way we actually do it.)

The next implementation (and a pretty good one) is to start in the middle of the dictionary. Does the word we're looking for come before or after that? If it's before, turn to the page half way between the start and where we are now - otherwise, turn to the page half way between where we are now and the end, etc - binary chop.

The actual human implementation is to use our knowledge of letters to get very rapidly to "nearly the right place" - if we see "elephant" then we'll know it'll be "somewhere near the start" maybe about 1/5th of the way through. Once we've got to E (which we can do with very, very simple comparisons) we find EL etc.
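
The first two strategies above can be sketched as (a minimal sketch in Python; the binary chop uses the standard-library `bisect` module):

```python
from bisect import bisect_left

words = ["apple", "elephant", "house", "zebra"]  # sorted, like a dictionary

def naive_lookup(words, target):
    """Start at the beginning and check every word in turn: O(n)."""
    for i, w in enumerate(words):
        if w == target:
            return i
    return -1

def binary_chop(words, target):
    """Repeatedly halve the search range: O(log n), but needs sorted input."""
    i = bisect_left(words, target)
    if i < len(words) and words[i] == target:
        return i
    return -1

print(naive_lookup(words, "elephant"), binary_chop(words, "elephant"))  # 1 1
```

As the dmckee comment below notes, the naive scan keeps working on an unsorted list, while the chop does not.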

Jon Skeet
There is a chance to explain the limits of non-naive algorithms here too. If the list isn't sorted the "smart" method won't work, but the naive one will... Good answer, BTW.
dmckee
Meh. A naive implementation is usually intuitive or the first that comes to mind. Your analogy fails here, as you said yourself.
FogleBird
Note that the simple implementation is more robust, because it uses less information: the dictionary doesn't need to be in alphabetic order.
reinierpost
+6  A: 

StackOverflow's Jeff Atwood had a great example of a naive algorithm related to shuffling an array.
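
Assuming the linked post is the well-known one about biased shuffles, the contrast it draws can be sketched like this:

```python
import random

def naive_shuffle(items):
    """Swap every position with a random position: looks fine, but biased.

    There are n**n equally likely swap sequences, which cannot map
    evenly onto the n! possible orderings.
    """
    a = list(items)
    n = len(a)
    for i in range(n):
        j = random.randrange(n)
        a[i], a[j] = a[j], a[i]
    return a

def fisher_yates(items):
    """Swap each position only with positions at or after it: unbiased."""
    a = list(items)
    for i in range(len(a) - 1):
        j = random.randrange(i, len(a))
        a[i], a[j] = a[j], a[i]
    return a
```

Both "work" in the sense of producing a permutation; only the second produces every permutation with equal probability.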

Gareth
To be fair, my linked example isn't a particularly non-technical one. But it's still my favourite demonstration.
Gareth
A: 

Naive doesn't mean bad or unusable - it means having certain qualities which pose a problem in a specific context and for a specific purpose.

The classic example of course is sorting. In the context of sorting a list of ten numbers, any old algorithm (except bogosort) would work pretty well. However, when we get to the scale of thousands of numbers or more, typically we say that selection sort is the naive algorithm because it has the quality of O(n^2) time, which would be too slow for our purposes, and that the non-naive algorithm is quicksort, because it has the quality of O(n lg n) time, which is fast enough for our purposes.

In fact, the case could be made that in the context of sorting a list of ten numbers, quicksort is the naive algorithm, since it will take longer than selection sort.

Justice
I would take issue with the idea that naivete is about performance. It is, IMHO of course, only about how hard you have to think.
dmckee
Performance need not be the criterion whereby one judges naivety. Typically, achieving good results in a more particular context will require harder thinking. Take the example of a PEG. A more complex ("non-naive") caching algorithm lets us define left-recursive grammars.
Justice
I totally disagree with the last paragraph: naive doesn't mean bad or slow, it means straightforward. It *could* be that a naive solution is actually excellent, performancewise. Quicksort is never straightforward, no matter how you look at it.
bart
Quicksort is very straightforward. In functional languages, quicksort is a one-liner, while selection-sort is a more complicated algorithm. In-place-quicksort is typically far less straightforward than in-place-selection-sort, however.
Justice
A: 

The intuitive algorithms you normally use to sort a deck of cards (insertion sort or selection sort, both O(n^2)) can be considered naive, because they are easy to learn and implement, but would not scale well to a deck of, say, 100,000 cards :D. In a general setting, there are faster (O(n log n)) ways to sort a list.

Note, however, that naive does not necessarily mean bad. There are situations where insertion sort is a good choice (say, when you have an already sorted big deck and few unsorted cards to add).
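
The card-sorting intuition maps directly onto code; a minimal insertion sort sketch:

```python
def insertion_sort(cards):
    """Sort the way many people sort a hand of cards.

    O(n^2) in general, but close to O(n) when the input is already
    mostly sorted - which is why "naive" need not mean "bad".
    """
    hand = list(cards)
    for i in range(1, len(hand)):
        card = hand[i]
        j = i - 1
        # Shift bigger cards right until this card's slot appears.
        while j >= 0 and hand[j] > card:
            hand[j + 1] = hand[j]
            j -= 1
        hand[j + 1] = card
    return hand

print(insertion_sort([2, 5, 3, 9, 7]))  # [2, 3, 5, 7, 9]
```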

Federico Ramponi
A: 

(Haven't seen a truly naive implementation posted yet so...)

The following implementation is "naive" because it does not cover the edge cases, and will break on them. It is very simple to understand, and can convey a programming message.

    def naive_inverse(x):
        return 1/x

It will:

  • Break on x=0
  • Do a bad job when passed an integer (integer division, in Python 2)

You could make it more "mature" by adding these features.

Ali A
I'd argue that is just a "broken" implementation, not a "naive" implementation... not quite the same
Marc Gravell
+1  A: 

Another naive implementation would be the use of recursion to compute an integer's factorial in an imperative language. A more efficient solution in that case is to just use a loop.
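
The contrast can be sketched as:

```python
def factorial_recursive(n):
    """Naive: one stack frame per multiplication; overflows the stack for large n."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_loop(n):
    """Same result with a plain loop and constant stack usage."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_loop(5))  # 120 120
```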

cruizer
It could be worse: try implementing Ackermann's function using recursion and without memoization. Then, try to calculate A(6,6) - ah, that's not just naïveté, but also masochism of the worst possible kind :-)
Joe Pineda
+1  A: 

What's the most obvious, naive algorithm for exponentiation that you could think of?

base ** exp is base * base * ... * base, exp times:

double pow(double base, int exp) {
    double result = 1;
    for (int i = 0; i < exp; i++)
        result *= base;
    return result;
}

It doesn't handle negative exponents, though. Remembering that base ** exp == 1 / base ** (-exp) == (1 / base) ** (-exp):

double pow(double base, int exp) {
    double result = 1;
    if (exp < 0) {
        base = 1 / base;
        exp = -exp;
    }
    for (int i = 0; i < exp; i++)
        result *= base;
    return result;
}

It's actually possible to compute base ** exp with fewer than exp multiplications, though!

double pow(double base, int exp) {
    double result = 1;
    if (exp < 0) {
        base = 1 / base;
        exp = -exp;
    }
    while (exp) {
        if (exp % 2) {
            result *= base;
            exp--;
        }
        else {
            base *= base;
            exp /= 2;
        }
    }
    return result;
}

This takes advantage of the fact that base ** exp == (base * base) ** (exp / 2) if exp is even, and will only require about log2(exp) multiplications.

ephemient
+2  A: 

I took the time to read your question a little closer, and I have the perfect example.

a good clear example which will illustrate -- ideally, even to non-technical people -- that the naive implementation may technically be a functioning solution to the problem, but practically be utterly unusable.

Try Bogosort!

If bogosort were used to sort a deck of cards, it would consist of checking if the deck were in order, and if it were not, one would throw the deck into the air, pick up the cards up at random, and repeat the process until the deck is sorted.

ephemient
+1  A: 

"Naive implementation" is almost always synonymous with "brute-force implementation". Naive implementations are often intuitive and the first to come to mind, but are also often O(n^2) or worse, thus taking too long to be practical for large inputs.

Programming competitions are full of problems where the naive implementation will fail to run in an acceptable amount of time, and the heart of the problem is coming up with an improved algorithm that is generally much less obvious but runs much more quickly.

FogleBird
A: 

Naive implementation is:

  • intuitive;
  • first to come to mind;
  • often ineffective and/or buggy in corner cases.
lbownik