views: 2815
answers: 10

Link to the original problem

It's not a homework question. I just thought that someone might know a real solution to this problem.

I was on a programming contest back in 2004, and there was this problem:

Given n, find the sum of the digits of n!. n can be from 0 to 10000. Time limit: 1 second. I think there were up to 100 numbers in each test set.

My solution was pretty fast but not fast enough, so I just let it run for some time. It built an array of pre-calculated values which I could use in my code. It was a hack, but it worked.

But there was a guy who solved this problem with about 10 lines of code, and it would give an answer in no time. I believe it was some sort of dynamic programming, or something from number theory. We were 16 at the time, so it should not have been rocket science.

Does anyone know what kind of algorithm he could have used?

EDIT: I'm sorry if I didn't make the question clear. As mquander said, there should be a clever solution without bignum, with just plain Pascal code, a couple of loops, O(n^2) or something like that. 1 second is not a constraint anymore.

I found here that if n > 5, then 9 divides the sum of the digits of n!. We can also find how many zeros there are at the end of the number. Can we use that?

OK, here is another problem from a programming contest in Russia. Given 1 <= N <= 2 000 000 000, output N! mod (N+1). Is that somehow related?

+2  A: 

1 second? Why can't you just compute n! and add up the digits? That's 10000 multiplications and no more than a few tens of thousands of additions, which should take approximately one zillionth of a second.

mquander
In many languages native integer types will overflow very quickly. Plus, more elegant and efficient solutions undoubtedly exist.
Chris Lutz
Of course, but there are also many languages with native support for bignums. Anyway, I'm sure there's a clever solution, but the question must be mistaken about the constraints somehow, or else this would be plenty simple.
mquander
Even in Python, with native bignum support, a naive (unmemoized) recursive implementation hits the recursion depth limit after a few seconds. It never gets to 10000 as required.
Chris Lutz
That's exactly what I did. We used Pascal, so no BigInt. I had to manually write everything using arrays. It wasn't fast enough; it took about 10 seconds. Now, 5 years later, a Java solution runs in 800 ms.
tulskiy
I didn't understand that you meant calculate it for every N from 1 to 10000. I agree, that may well take longer than a second in many environments.
mquander
@Chris Lutz: You don't have to compute a factorial recursively. Just use a for loop and you're safe from stack overflows.
Pillsy
I think we can safely assume the author of this problem did not have in mind "all you have to do is use BigInt!". There's some trick here.
DanM
@Pillsy: Lutz is not talking about stack overflows. He's talking about integer overflows. 100! will overflow both 32- and 64-bit unsigned integer types.
abelenky
+1  A: 

Small, fast Python script found at http://www.penjuinlabs.com/blog/?p=44. It's elegant but still brute force.

import sys
for arg in sys.argv[1:]:
    print reduce( lambda x,y: int(x)+int(y), 
          str( reduce( lambda x, y: x*y, range(1,int(arg)))))

 

$ time python sumoffactorialdigits.py 432 951 5436 606 14 9520
3798
9639
74484
5742
27
141651

real    0m1.252s
user    0m1.108s
sys     0m0.062s
mobrule
I believe `lambda x, y: x * y` can be changed to `operator.mul` and it will be faster because it will directly use the built-in multiplication operator rather than indirectly using it through a lambda. Same goes for `lambda x, y: x + y` and `operator.add`
Chris Lutz
`assert n > 0; sum(map(int, str(reduce(operator.mul, range(1, n+1)))))` is slightly more digestible. Note: `+1`.
J.F. Sebastian
+3  A: 

This is A004152 in the Online Encyclopedia of Integer Sequences. Unfortunately, it doesn't have any useful tips about how to calculate it efficiently - its Maple and Mathematica recipes take the naive approach.

Nick Johnson
+2  A: 

Assume you have big numbers (this is the least of your problems, assuming that N is really big, and not 10000), and let's continue from there.

The trick below is to factor N! by factoring all n<=N, and then compute the powers of the factors.

Have a vector of counters; one counter for each prime number up to N; set them to 0. For each n <= N, factor n and increase the counters of its prime factors accordingly (factor smartly: start with the small primes, construct the primes while factoring, and remember that division by 2 is a shift). Subtract the counter of 5 from the counter of 2, and make the counter of 5 zero (nobody cares about factors of 10 here).

Edit

Compute all the prime numbers up to N, then run the following loop (this is Legendre's formula for the exponent of primes[j] in N!):

for (j = 0; j< last_prime; ++j) {
  count[j] = 0;
  for (i = N/ primes[j]; i; i /= primes[j])
    count[j] += i; 
}

end of Edit

Note that in the previous block we only used (very) small numbers.

For each prime factor P you have to compute P to the power of the appropriate counter, which takes log(counter) multiplications using iterative squaring; then you have to multiply all these prime powers together.

All in all you have about N log(N) operations on small numbers (log N prime factors), and log(N) log(log N) operations on big numbers.

Edit

and after the improvement in the edit, only N operations on small numbers.

end of Edit

HTH
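
Putting the pieces together, here is a minimal Python sketch of this approach (a sieve, Legendre's formula for the exponents, iterative squaring via the built-in pow, then one big product); the function name and structure are mine for illustration, not part of the original answer:

def digit_sum_of_factorial(n):
    # Sieve of Eratosthenes: all primes up to n.
    sieve = [True] * (n + 1)
    primes = []
    for p in range(2, n + 1):
        if sieve[p]:
            primes.append(p)
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    # Legendre's formula: the exponent of p in n! is n//p + n//p^2 + ...
    result = 1
    for p in primes:
        e, q = 0, n // p
        while q:
            e += q
            q //= p
        result *= pow(p, e)   # pow uses iterative squaring internally
    return sum(int(d) for d in str(result))

# e.g. digit_sum_of_factorial(100) == 648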

David Lehavi
...how does this help to find the sum of digits?
Jason S
the problem is to compute the number with only a few bignum multiplications (these are expensive)
David Lehavi
+3  A: 

I'd attack the second problem, to compute N! mod (N+1), using Wilson's theorem. That reduces the problem to testing whether N is prime.

Jitse Niesen
So if N+1 is prime, then N! mod (N+1) is N. If N+1 is composite and N > 4, then N! mod (N+1) is 0. Cases 0 <= N <= 4 are easily handled separately. Pretty cool!
Joren
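A minimal Python sketch of that case analysis (using simple trial division for the primality test; the function name is just for illustration):

def factorial_mod_next(n):
    # Return n! mod (n+1) without computing n!, via Wilson's theorem.
    m = n + 1
    if n <= 1:          # 0! = 1! = 1
        return 1 % m
    if n == 3:          # the one composite exception: 3! = 6, and 6 mod 4 = 2
        return 2
    # Wilson's theorem: (m-1)! ≡ -1 (mod m) if and only if m is prime.
    is_prime = all(m % d for d in range(2, int(m ** 0.5) + 1))
    return n if is_prime else 0   # -1 mod m is m-1 = n; composite m > 4 gives 0
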
Oh man, I think I came across Wilson's theorem somewhere but was confused about how to interpret the notation `(n - 1)! ≡ -1 (mod n)`. Looking it up, I now see that it means `(n - 1)! mod n ≡ -1 mod n` if and only if `n` is prime (Note: `≡` means "congruent").
DanM
So, this means that if `n = 10000` then `n + 1 = 10001`, which is not prime (it's `73 * 137`), so `10000! mod 10001 = 0`. This means that if we divide `10000!` by `10001`, there is no remainder. That's cool, but now what? I don't know how to make the jump from this to getting the sum of digits of `10000!`.
DanM
@DanThMan: I thought this problem was related to the first one. Turns out it's not. @Jitse Niesen: thanks for the theorem.
tulskiy
A: 

Let's see. We know that the calculation of n! for any reasonably large number will eventually lead to a number with lots of trailing zeroes, which don't contribute to the sum. How about lopping off the zeroes along the way? That'd keep the size of the number a bit smaller.

Hmm. Nope. I just checked, and integer overflow is still a big problem even then...

Mark Bessey
Okay, a quick check seems to show that the sum of the digits of n! is always divisible by 3 (at least for n <= 23). That's got to be useful somehow.
Mark Bessey
It is also divisible by 9
tulskiy
About 7% of the digits are trailing 0s. It does help to ignore them, but not all that much.
Greg Kuperberg
A: 

You have to compute the factorial.

1 * 2 * 3 * 4 * 5 = 120.

If you only want to calculate the sum of digits, you can ignore the ending zeroes.

For 6! you can do 12 x 6 = 72 instead of 120 * 6

For 7! you can use (72 * 7) MOD 10

EDIT.

I wrote a response too quickly...

10 is the product of the two primes 2 and 5.

Each time you have these 2 factors, you can ignore them.

1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 * 13 * 14 * 15...

1   2   3   2   5   2   7   2   3    2   11    2   13    2    3
            2       3       2   3    5         2         7    5
                            2                  3

The factor 5 appears at 5, 10, 15...
Then a trailing zero will appear after multiplying by 5, 10, 15...

We have a lot of 2s and 3s... We'll overflow soon :-(

Then, you still need a library for big numbers.

I deserve to be downvoted!

Luc M
@Luc, I went through half of your thought process. Got rid of all zeroes. Didn't make much difference, as you say. It overflowed after 60-something factorial :(
DanM
A: 

Even without arbitrary-precision integers, this should be brute-forceable. In the problem statement you linked to, the biggest factorial that would need to be computed would be 1000!. This is a number with about 2500 digits. So just do this:

  1. Allocate an array of 3000 bytes, with each byte representing one digit in the factorial. Start with a value of 1.
  2. Run grade-school multiplication on the array repeatedly, in order to calculate the factorial.
  3. Sum the digits.

Doing the repeated multiplications is the only potentially slow step, but I feel certain that 1000 of the multiplications could be done in a second, which is the worst case. If not, you could compute a few "milestone" values in advance and just paste them into your program.

One potential optimization: Eliminate trailing zeros from the array when they appear. They will not affect the answer.
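
A minimal Python sketch of steps 1-3 (base-10 digit array, least significant digit first, dropping trailing zeros as suggested above); this is an illustration, not the poster's code:

def digit_sum_factorial(n):
    digits = [1]                       # the running factorial, one decimal digit per entry
    for k in range(2, n + 1):
        carry = 0
        for i in range(len(digits)):   # grade-school multiplication by k
            carry += digits[i] * k
            digits[i] = carry % 10
            carry //= 10
        while carry:                   # append the remaining carry digits
            digits.append(carry % 10)
            carry //= 10
        while digits[0] == 0 and len(digits) > 1:
            digits.pop(0)              # drop trailing zeros; they never affect the sum
    return sum(digits)

# e.g. digit_sum_factorial(5) == 3   (5! = 120)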

OBVIOUS NOTE: I am taking a programming-competition approach here. You would probably never do this in professional work.

PeterAllenWebb
A: 

On my AMD 6000+ X2 with this code:

#include <ctime>
#include <iostream>

using namespace std;

#define MAX 10000

unsigned long long int fac(int);

int main (void){
    clock_t start;
    start = clock();


    unsigned long long int sum = 0;

    for(int i = 0; i < MAX; i++){
     sum += fac(i);
    }

    double diff = ( std::clock() - start ) / (double)CLOCKS_PER_SEC;
    cout << sum << "  Time:  " << diff <<'\n';

    return 0;
}

unsigned long long int fac(int a){
    unsigned long long int result = a;
    for(a -= 1; a > 1; a--){
     result *= a;
    }

    return result;
}

I get the result of: 1005876315485501977 Time: 0.272

DShook
You're answering the wrong question. We want the sum of the decimal digits in each factorial. There should be a way to do it without extended precision.
Peter Cordes
+4  A: 

I'm not sure who is still paying attention to this thread, but here goes anyway.

First, in the official-looking linked version, it only has to be 1000 factorial, not 10000 factorial. Also, when this problem was reused in another programming contest, the time limit was 3 seconds, not 1 second. This makes a huge difference in how hard you have to work to get a fast enough solution.

Second, for the actual parameters of the contest, Peter's solution is sound, but with one extra twist you can speed it up by a factor of 5 with 32-bit architecture. (Or even a factor of 6 if only 1000! is desired.) Namely, instead of working with individual digits, implement multiplication in base 100000. Then at the end, total the digits within each super-digit. I don't know how good a computer you were allowed in the contest, but I have a desktop at home that is roughly as old as the contest. The following sample code takes 16 milliseconds for 1000! and 2.15 seconds for 10000! The code also ignores trailing 0s as they show up, but that only saves about 7% of the work.

#include <stdio.h>
int main() {
    /* dig[] holds the running factorial in base 100000, least significant
       super-digit first; "first" skips super-digits that are trailing zeros. */
    unsigned int dig[10000], first=0, last=0, carry, n, x, sum=0;
    dig[0] = 1;
    for(n=2; n <= 10000; n++) {
        carry = 0;
        for(x=first; x <= last; x++) {
            carry = dig[x]*n + carry;
            dig[x] = carry%100000;
            if(x == first && !(carry%100000)) first++;
            carry /= 100000; }
        if(carry) dig[++last] = carry; }
    /* add up the five decimal digits inside each super-digit */
    for(x=first; x <= last; x++)
        sum += dig[x]%10 + (dig[x]/10)%10 + (dig[x]/100)%10 + (dig[x]/1000)%10
            + (dig[x]/10000)%10;
    printf("Sum: %u\n",sum); }

Third, there is an amazing and fairly simple way to speed up the computation by another sizable factor. With modern methods for multiplying large numbers, it does not take quadratic time to compute n!. Instead, you can do it in O-tilde(n) time, where the tilde means that you can throw in logarithmic factors. There is a simple acceleration due to Karatsuba that does not bring the time complexity down to that, but still improves it and could save another factor of 4 or so. In order to use it, you also need to divide the factorial itself into equal sized ranges. You make a recursive algorithm prod(k,n) that multiplies the numbers from k to n by the pseudocode formula

prod(k,n) = prod(k,floor((k+n)/2))*prod(floor((k+n)/2)+1,n)

Then you use Karatsuba to do the big multiplication that results.
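
For instance, a minimal Python sketch of that recursion (leaning on Python's built-in big-integer multiplication rather than a hand-written Karatsuba):

def prod(k, n):
    # Balanced product of the integers k..n, so the two factors of each
    # big multiplication have roughly the same number of digits.
    if k > n:
        return 1
    if k == n:
        return k
    mid = (k + n) // 2
    return prod(k, mid) * prod(mid + 1, n)

# sum of the digits of 10000!:
# sum(int(d) for d in str(prod(1, 10000)))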

Even better than Karatsuba is the Fourier-transform-based Schönhage-Strassen multiplication algorithm. As it happens, both algorithms are part of modern big number libraries. Computing huge factorials quickly could be important for certain pure mathematics applications. I think that Schönhage-Strassen is overkill for a programming contest. Karatsuba is really simple and you could imagine it in an A+ solution to the problem.


Part of the question posed is some speculation that there is a simple number theory trick that changes the contest problem entirely. For instance, if the question were to determine n! mod n+1, then Wilson's theorem says that the answer is -1 when n+1 is prime, and it's a really easy exercise to see that it's 2 when n=3 and otherwise 0 when n+1 is composite. There are variations of this too; for instance n! is also highly predictable mod 2n+1. There are also some connections between congruences and sums of digits. The sum of the digits of x mod 9 is also x mod 9, which is why the sum is 0 mod 9 when x = n! for n >= 6. The alternating sum of the digits of x mod 11 equals x mod 11.
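
Both digit-sum congruences are easy to check numerically; a quick Python illustration (the value of x here is arbitrary):

x = 123456789
digits = [int(d) for d in str(x)]
assert sum(digits) % 9 == x % 9
# alternating sum, starting with + at the least significant digit (10 ≡ -1 mod 11)
alt = sum(d if i % 2 == 0 else -d for i, d in enumerate(reversed(digits)))
assert alt % 11 == x % 11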

The problem is that if you want the sum of the digits of a large number, not modulo anything, the tricks from number theory run out pretty quickly. Adding up the digits of a number doesn't mesh well with addition and multiplication with carries. It's often difficult to promise that the math does not exist for a fast algorithm, but in this case I don't think that there is any known formula. For instance, I bet that no one knows the sum of the digits of a googol factorial, even though it is just some number with roughly 100 digits.

Greg Kuperberg