tags:

views: 134

answers: 3

What is the most efficient way to calculate the least common multiple of two integers?

I just came up with this; it definitely leaves something to be desired:

    int n = 7, m = 4, n1 = n, m1 = m;

    while (m1 != n1) {
        if (m1 > n1) n1 += n;
        else m1 += m;
    }

    System.out.println("lcm is " + m1);
A: 

Take successive multiples of the larger of the two numbers until the result is a multiple of the smaller.

This might work:

    public int LCM(int x, int y) {
        int larger    = x > y ? x : y;
        int smaller   = x > y ? y : x;
        int candidate = larger;
        while (candidate % smaller != 0) candidate += larger;
        return candidate;
    }
Charles Bretana
This will work okay for small values of x and y, but it will have difficulty scaling.
andand
+17  A: 

The least common multiple of a and b is the product divided by the greatest common divisor. I.e. lcm(a, b) = ab/gcd(a, b). So the question becomes how to find the gcd. The Euclidean algorithm is generally how the gcd is computed. The direct implementation of the classic algorithm is efficient, but there are variations that take advantage of binary arithmetic to do a little better. See Knuth TAOCP volume 2, Seminumerical Algorithms.
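
A minimal sketch of that approach, assuming non-negative `long` inputs (method names chosen just for illustration):

    // Euclid's algorithm for the gcd, then the lcm derived from it.
    static long gcd(long a, long b) {
        while (b != 0) {
            long t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    static long lcm(long a, long b) {
        if (a == 0 || b == 0) return 0;
        return a / gcd(a, b) * b;  // divide first so the intermediate value stays small
    }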

John D. Cook
Yes, LCM using GCD is fast and easy to code. One small but important detail: in order to avoid overflows, calculate the final result like this: `lcm = a / gcd * b` instead of `lcm = a * b / gcd`.
Bolo
@Bolo - if you are "worried" about overflow, you should be using `long` or in some circumstances even `BigInteger`. The LCM of two `int` values may be a `long`.
Stephen C
@Stephen C With Bolo's approach the LCM can be computed without overflow if it can be represented. There is no need to use a bigger and slower number type just for the multiplication.
starblue
@starblue - but conversely, nothing in the question says that the LCM *can* be represented as an `int`. And we know for a fact that for certain values of `m` and `n` it cannot. My point is that if you worry about overflow in the calculation you should *also* worry about overflow in the final result.
Stephen C
@Stephen C It may happen that the two input integers are of order O(N) and their LCM is also of order O(N). In the original approach the intermediate result is of order O(N^2), while in the modified one it's only O(N). Example: p = 2^31 - 1 = 2147483647, m = 2*p, n = 3*p. Their LCM is 6*p; these are not very large numbers (`long` can represent integers up to 2^63 - 1 = 9223372036854775807), but the original approach will overflow anyway (the intermediate value is 6*p*p). A simple reordering can greatly improve the algorithm's applicability, regardless of the type (`short`, `int`, or `long`).
Bolo
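
A quick check of those numbers in `long` arithmetic (a sketch; variable names are arbitrary):

    long p = 2147483647L;             // 2^31 - 1
    long m = 2 * p, n = 3 * p;
    long g = p;                       // gcd(2p, 3p) = p
    System.out.println(m / g * n);    // 12884901882 = 6p, correct
    System.out.println(m * n / g);    // wrong: the intermediate 6*p*p has already overflowed
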
@Bolo - true, but *also* changing the type of the result to `long` will increase its applicability *even further*. It is not hard to find coprime `m` and `n` that are less than `2^31` and `LCM(m, n)` (i.e. `m * n`) is greater than `2^31`.
Stephen C
@Stephen C Of course for fixed size integers the result of most operations can overflow. That's a given and one always has to choose an appropriately sized number type so that overflow doesn't occur. The point is that with Bolo's approach the internal computation for the LCM doesn't overflow.
starblue
@starblue - I know that. I accept that. I said that I accepted that. What is your point?
Stephen C
@Stephen C - but you kept advocating for using `long` and seemed to pooh-pooh Bolo's very important improvement, which avoids many overflow cases. Bolo was right.
Heath Hunnicutt
@Heath - if you read the entire thread of comments, it should be absolutely clear that I was not pooh-poohing @Bolo's idea. For example my last response to @Bolo. Read it carefully.
Stephen C
+2  A: 

I think that the approach of "reduction by the greatest common divisor" should be faster. Start by calculating the GCD (e.g. using Euclid's algorithm), then divide the product of the two numbers by the GCD.
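
If the values might not fit comfortably in a fixed-width type, `BigInteger` (mentioned in the comments above) has a built-in `gcd`; a rough sketch, with the class and method names chosen just for illustration:

    import java.math.BigInteger;

    public class LcmExample {
        static BigInteger lcm(BigInteger a, BigInteger b) {
            if (a.signum() == 0 || b.signum() == 0) {
                return BigInteger.ZERO;
            }
            // Neither the gcd nor the multiplication can overflow with BigInteger.
            return a.divide(a.gcd(b)).multiply(b).abs();
        }

        public static void main(String[] args) {
            System.out.println(lcm(BigInteger.valueOf(21), BigInteger.valueOf(6)));  // prints 42
        }
    }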

Stephen C