tags:

views: 1620

answers: 20

I hear logarithms mentioned quite a lot in the programming context. They seem to be the solution to many problems and yet I can't seem to find a real-world way of making use of them. I've read the Wikipedia entry and that, quite frankly, leaves me none the wiser.

So, where can I learn about the real-world programming problems that logarithms solve? Has anyone got any examples of problems they faced that were solved by implementing a logarithm?

+2  A: 

The only problem I can recall is having to calculate the product of a column in SQL. SQL Server does not have a PRODUCT() aggregate function, so this was accomplished using a sum of the logarithms (using the LOG10() function) of each value. The main drawback was that all numbers in the column had to be strictly positive (you cannot take the logarithm of zero or a negative number).
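
The same trick works outside SQL. Here is a minimal sketch of the idea in Python (the values are illustrative and must all be positive):

    import math

    values = [2.0, 5.0, 10.0]   # illustrative data; every value must be > 0
    product = math.exp(sum(math.log(v) for v in values))
    print(product)              # ~100.0, i.e. 2 * 5 * 10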

Ben Hoffstein
Definitely a great transformation you mention here, and it can be extended beyond just SQL. When products are really small you can often take the sum of the logs instead.
nlucaroni
+8  A: 
Mendelt
I think ^ is clearer than ** to denote an exponent... to me at least
Chris Marasti-Georg
This is very interesting and exactly what I meant by "real world". In trying to grok this answer I found the following very useful: http://math.about.com/od/formulas/a/compound.htm I ended up with the following to get back to 2000: 1000*(1 + 0.024)**29.2263362061568
Charles Roper
Fixed the typos, thanks for the feedback. @Charles Roper: you're getting it now. In other words, logs tell you how many times you have to multiply a number by itself to get to another number.
Mendelt
You're using inconsistent exponent operators now... :)
Charles Roper
@Charles Roper: Damn, I shouldn't edit my posts late at night when drinking wine :-)
Mendelt
I noticed the notation used for log was wrong (at least in my experience... Correct me if *I* am wrong!). Fixed that, as well as a small rounding issue.
strager
@strager: I'm actually used to a notation where the base of the log is written in superscript before log but couldn't find how to do superscript. This is better, thanks!
Mendelt
When applying these formulas to real-life savings accounts, remember that interest rates are quoted per annum, but often the interest is compounded more often. For monthly compounding, each month you multiply by (1 + r/12).
quant_dev
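
Following up on the compound-interest discussion in these comments, here is a small Python check of the numbers mentioned above (the 2.4% rate and the 1000-to-2000 example come from the comments):

    import math

    # How many periods does it take to grow 1000 to 2000 at 2.4% per period?
    # Solve 1000 * (1 + 0.024)**n = 2000  =>  n = log(2) / log(1.024)
    n = math.log(2000 / 1000) / math.log(1.024)
    print(n)                  # ~29.2263...
    print(1000 * 1.024 ** n)  # ~2000.0
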
+9  A: 

Logarithms also come up frequently in programming when describing the efficiency of an algorithm using Big O notation.

For instance, a binary search algorithm (on a sorted set) has a worst-case running time of O(log(n)), whereas a linear search's worst case is O(n).

Chris Marasti-Georg
Yes of course, this is a great example. Any time you have something being split in half after each iteration, think Log!
Ben Hoffstein
+4  A: 

I've seen logarithms used for displaying tag clouds. This is a page that explains it:

Tag Cloud Font Distribution Algorithm

muloh
+2  A: 

Take a look at this article. It's quite simple and explanatory.

cleg
+2  A: 

The most obvious use in everyday programming is precision. Put simply, consider storing unsigned integers. How many bits do you need to store X? The maximum value you can store in n bits is 2^n - 1, so you need floor(log2(X)) + 1 bits to store X. Now you can pick short, int, word, long, etc. with ease.
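
A rough Python sketch of that calculation (the test values are just examples):

    import math

    def bits_needed(x):
        """Bits required to store the unsigned integer x (x >= 1)."""
        return math.floor(math.log2(x)) + 1

    print(bits_needed(255))   # 8  (fits in one byte)
    print(bits_needed(256))   # 9
    # For integers, Python's built-in x.bit_length() gives the same answer.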

Adam Wright
+4  A: 

Logs are a type of meta-arithmetic. It's a way of thinking of every number as a (possibly fixed) base raised to an exponent. Operations are performed on the exponents alone. This means that you can do multiplication and division by doing addition and subtraction of the logs. In other words, you put your data into log space, perform a suite of arithmetic, and pull it back into non-log space. If the floating-point precision loss is acceptable and the overhead of transforming into and out of log space is cheap, then you may have an overall win in time.

One slick trick you can do with logs is to calculate the number of characters a number will take when printed: take the log base 2 of the number and divide it by log base 2 of 10 (equivalently, multiply by log base 10 of 2), then take the floor and add one. This is constant time compared with a loop of repeated divisions by 10.
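
A small Python sketch of both ideas, with illustrative values:

    import math

    # Multiply in log space: ln(a * b) = ln(a) + ln(b)
    a, b = 123456.0, 789.0
    product = math.exp(math.log(a) + math.log(b))            # ~= a * b, within float precision

    # Decimal digits of a positive integer n: floor(log2(n) / log2(10)) + 1
    n = 987654321
    digits = math.floor(math.log2(n) / math.log2(10)) + 1    # 9
    # (For n near an exact power of 10, float rounding can be off by one.)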

plinth
+2  A: 

I assume you have heard logarithms mentioned in the context of time complexity.

A concrete example would be search algorithms. Given a set of ordered data (think a sorted array of ints), you want to find the index of a value in that data. We can benefit from the fact that the array is sorted (1, 2, 6, 192, 404, 9595, 50000, for example). Let's say we want to find the index of the value 2. We can shrink our search space by culling (ignoring) half the array at each step. We start by testing the value at the middle of the array. There are 7 values in the array, so we take the index 7/2 = 3.5 = 3 as an int. array[3] is 192. The value we are looking for is 2, so we continue the search in the lower half of the search space. We can completely ignore indexes 4, 5 and 6, since those values are all higher than 192, and in turn also higher than 2. Now the search space looks like (1, 2, 6). We index into the middle again (repeat the process) and find the 2 instantly. The search is complete; the index of 2 is 1.

This is a very small example, but it shows how such an algorithm works.

For 16 values, you need at most 4 steps. For 32 values, at most 5; for 64 values, 6; and so on. 1,048,576 values can be searched in 20 steps. This is far quicker than comparing each item in the array separately. Of course, this only works for sorted collections of data.
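
The walkthrough above translates almost directly into code; here is a minimal Python sketch using the same example array:

    def binary_search(sorted_values, target):
        """Return the index of target in sorted_values, or -1 if it is absent."""
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2          # test the middle of the current search space
            if sorted_values[mid] == target:
                return mid
            elif sorted_values[mid] < target:
                lo = mid + 1              # cull the lower half
            else:
                hi = mid - 1              # cull the upper half
        return -1

    print(binary_search([1, 2, 6, 192, 404, 9595, 50000], 2))   # 1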

Statement
+1  A: 

Logarithms are used quite often in charts and graphs, when one or both axes cover a large range of values.

Some natural phenomena are best expressed on a logarithmic scale; some examples are sound pressure levels (SPL in dB) and earthquake magnitude (Richter scale).

Mark Ransom
+2  A: 

One example, out of many: calculating compound interest at a very small rate over a large number of periods.

You can do it the most straightforward way, even using fast exponentiation, but accuracy may suffer due to the way floats are stored, and calculating s * r^n still takes O(log(n)) multiplications.

With logarithms, it's somewhat more accurate.

A = ln(s * r^n) = ln(s) + n * ln(r)

Two lookups in your logarithm database give you ln(s) and ln(r), with ln(r) being very small, and floats work at their best accuracy near 0. Then result = exp(A), a reverse lookup.

It's also the only really efficient way to handle non-integer exponents, for example to extract cube roots.
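
A small Python sketch of the idea (the principal, rate, and exponent below are made-up values):

    import math

    # Compound growth s * r^n computed in log space:
    s, r, n = 1000.0, 1.0004, 2500         # illustrative values only
    A = math.log(s) + n * math.log(r)      # ln(s * r^n) = ln(s) + n * ln(r)
    amount = math.exp(A)                   # ~= s * r**n

    # Non-integer exponents work the same way, e.g. a cube root:
    x = 27.0
    cube_root = math.exp(math.log(x) / 3)  # ~3.0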

Johan Buret
+1  A: 

Check out MIT's OpenCourseWare: Introduction to Algorithms. Free education. Awesome.

Zach
Looks very interesting indeed, but does it cover logarithms?
Charles Roper
+2  A: 

I recommend e: The Story of a Number for a good foundation of the importance of logarithms, their discovery and relevance to natural phenomena.

Cade Roux
+2  A: 

Another way of looking at it is to count how many times the base is multiplied together to reach a number. I am sure you can see how this all relates in the following examples.

Decimal (base 10):

  • log10 (1) = 0 , (10^0) = 1
  • log10 (10) = 1 , (10^1) = 10
  • log10 (100) = 2 , (10^2) = 100
  • log10 (1000) = 3 , (10^3) = 1000
  • log10 (10000) = 4 , (10^4) = 10000
  • log10 (100000) = 5 , (10^5) = 100000

Binary (base 2):

  • log2 (1) = 0 , (2^0) = 1
  • log2 (2) = 1 , (2^1) = 2
  • log2 (4) = 2 , (2^2) = 4
  • log2 (8) = 3 , (2^3) = 8
  • log2 (16) = 4 , (2^4) = 16
  • log2 (32) = 5 , (2^5) = 32
  • log2 (64) = 6 , (2^6) = 64
  • log2 (128) = 7 , (2^7) = 128

Hexadecimal (base 16):

  • log16 (1) = 0 , (16^0) = 1
  • log16 (16) = 1 , (16^1) = 16
  • log16 (256) = 2 , (16^2) = 256
  • log16 (4096) = 3 , (16^3) = 4096
  • log16 (65536) = 4 , (16^4) = 65536

If you want to think in variables:

  • log N (X) = Y
  • (N^Y) = X
Statement
+3  A: 

Many (many!) relationships in the real world are logarithmic. For instance, it would not surprise me if the distribution of reputation scores on Stack Overflow is log normal. The vast majority of users will have reputation scores of 1 and a handful of people will have unattainably high reputation. If you apply a logarithmic transformation to that distribution, it would likely be nearly a linear relation. A quick scan of http://stackoverflow.com/users?page=667 shows this to be true.

You might be more familiar with The Long Tail concept, which is an application of the logarithmic distribution.

Jon Ericson
+2  A: 

One of the more "cool" applications of logarithms I've found is Spiral Storage. It's a hash table that allows you to split one bucket at a time as the table grows, relocating less than half of the records in that bucket to the same, new bucket. Unlike linear hashing, where the performance varies cyclically and all of the buckets tend to be split at around the same time, spiral hashing allows nice, smooth growth of the table.

It was published about 30 years ago by G. N. N. Martin, about whom I haven't been able to learn much besides the fact that he also invented Range Encoding. Seems like a smart guy! I haven't been able to get a copy of his original paper, but Per-Åke Larson's paper "Dynamic hash tables" has a very clear description.

erickson
+1  A: 

As an example of what Chris is talking about, an algorithm that changes complexity based on the number of bits in a value is (probably) going to have an efficiency described by O(log(n)).

Another everyday example of exponents (and hence logarithms) is in the format of IEEE floating point numbers.
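
For instance, Python's math.frexp exposes the mantissa/exponent representation directly (a quick illustration, not tied to any particular hardware format):

    import math

    # A float is stored roughly as mantissa * 2**exponent; frexp exposes the pieces.
    mantissa, exponent = math.frexp(6.5)     # (0.8125, 3), because 0.8125 * 2**3 == 6.5
    # For positive x, that exponent is floor(log2(x)) + 1:
    print(math.floor(math.log2(6.5)) + 1)    # 3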

Andrew Edgecombe
+1  A: 

A logarithmic function is simply the inverse of an exponential function, in the same sense that subtraction is the inverse of addition. Just as this equation:

a = b + c

states the same fact as this equation:

a - c = b

this equation:

b ** p = x

(where ** is raising to a power) states the same fact as this equation:

log [base b] (x) = p

Although b can be any number (e.g. log [base 10] (10,000) = 4), the "natural" base for mathematics is e (2.718281828...), about which see here.

"Common" logarithms, used more in engineering, use a base of 10. A quick-and-dirty (emphasis on dirty) interpretation of the common (base 10) logarithm of some number x is that it is one less than the number of decimal digits required to express numbers the size of x.

joel.neely
+3  A: 

In my own research I came upon a few useful resources:

Ruby Quiz #105: Tournament Matchups

This article contains a good example of using a base 2 log to determine the number of rounds required to complete a knock-out tournament given x teams (a short sketch of that calculation appears at the end of this answer).

An Intuitive Guide To Exponential Functions & E

An excellent, intuitive (as you'd expect, given the title), guide to e, the base of the natural logarithm. Lots of illustrations and examples make this a gem of an article.

Demystifying the Natural Logarithm (ln)

This is the follow-up to the article about e and discusses the natural logarithm (ln), which, using the intuitive explanation given in the article, "gives you the time needed to reach a certain level of growth".

There's actually loads of good content on the Better Explained site. Truly, a splendid resource.

Another tool that I had actually come across before but since completely forgotten about is Instacalc. It seems to be by the same person - Kalid Azad - who authors the Better Explained site. It's a really useful tool when hacking about with maths.
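
The tournament example mentioned above boils down to a one-liner; a quick Python sketch (the team count is arbitrary):

    import math

    teams = 20
    rounds = math.ceil(math.log2(teams))   # 5 rounds: 20 -> 10 -> 5 -> 3 -> 2 -> 1 (with byes)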

Charles Roper
A: 

Demystifying the Natural Logarithm (ln) at BetterExplained is the best I have found. It builds the concepts up from the basics and helps you understand the underlying ideas. After that, everything seems like a cakewalk.

Navneet
+1  A: 

Here are some sites that I have used:

I have used logarithms for calculating the yearly appreciation on a house to determine whether the seller was being fair.

House Appreciation Equations

Here is the basic equation:

  • Previous price = p
  • New price = n
  • Appreciation rate = r
  • Years of appreciation = y

p * (1 + r)^y = n

So, if the price 6 years ago was $191,000 (by checking your county auditor's site) and the asking price is $284,000, what is the appreciation rate (which would not take into account any one-time improvement costs)?

191,000 * (1 + r)^6 = 284,000

(1 + r)^6 = 284,000 / 191,000 = 1.486

Using a property of exponents and logarithms…

6 ( log (1 + r) ) = log 1.486
log (1 + r) = (log 1.486) / 6 = 0.02866

Using another property of exponents and logarithms…

10^0.02866 = 1 + r
1.068 = 1 + r
r = 1.068 – 1 = 0.068 = 6.8%  (kind of high!)

To determine what a reasonable price would be… use 4% and allow for whatever improvements they made (which should be listed on the web if they were major… but it wouldn't include bathroom/kitchen remodeling, etc.)

191,000 * (1 + 0.04)^6 = n
n = 241,675 + reasonable cost of improvements, which of course will depreciate over time and should not represent 100% of the cost of the improvement
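
The whole calculation is easy to script; here is a small Python sketch using the same figures as above:

    import math

    # Solve p * (1 + r)**y = n for the annual appreciation rate r,
    # using the same log identities as the worked example above.
    p, n, y = 191_000, 284_000, 6
    r = 10 ** (math.log10(n / p) / y) - 1
    print(round(r * 100, 1))        # ~6.8 (percent per year)

    # Forward direction: what would a 4% rate give after 6 years?
    print(round(p * 1.04 ** y))     # ~241,676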