views: 106
answers: 2
#include<stdio.h>
#include<math.h>    /* needed for sqrt() */

int main()
{
    long long int m,n,i,j;
    int t;
    scanf("%d",&t);
    while(t--)
    {
        scanf("%lld",&m);
        scanf("%lld",&n);

        /* sieve of Eratosthenes over [0, n] */
        long long int a[n+2];
        for(i=0;i<=n;i++)
        {
            a[i]=1;
        }
        for(i=2;i<=sqrt(n);i++)
        {
            j=2;
            while((i*j)<=n)
            {
                a[i*j]=0;
                j++;
            }
        }
        /* print the numbers in [m, n] still marked as prime */
        for(i=m;i<=n;i++)
        {
            if(i==1)
                continue;
            if(a[i]!=0)
                printf("%lld\n",i);
        }
    }
    return 0;
}
+1  A: 
  1. Don't do this: for(i=2;i<=sqrt(n);i++). Recomputing sqrt(n) on every iteration of the loop is inefficient. Your compiler might optimize it away, but it's still bad practice. Save the value in a variable once and use that as the loop bound.
  2. long long int a[n+2]; can segfault if n is big enough. Use long long int *a = new long long int[n + 2] instead, and move the allocation OUT of the while loop: allocating memory is slow and you only need to do it once, so allocate for the maximum possible n before the loop and leave only the initialization inside it. That said, since n can be as large as one billion, you should reconsider your algorithm, because an array of that size will either segfault or exceed the memory limit no matter how it is allocated. Hint: don't sieve the entire range. The naive algorithm, with some optimizations, is enough to solve this problem (see the sketch after this list).
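
To make that hint concrete, here is a rough sketch (C++, untested against the judge, not a guaranteed-accepted solution) of the naive approach: test every number in [m, n] by trial division, so no array proportional to n is needed at all. It assumes the window [m, n] itself is small enough to walk one number at a time; the is_prime helper is only for illustration, and note that the square-root bound is computed once per number, per point 1.

#include <cstdio>
#include <cmath>

// Trial division: check 2, then odd divisors up to sqrt(x).
// The bound is computed once per number, not once per loop iteration.
static bool is_prime(long long x)
{
    if (x < 2)       return false;
    if (x < 4)       return true;        // 2 and 3
    if (x % 2 == 0)  return false;
    long long limit = (long long)std::sqrt((double)x);
    for (long long d = 3; d <= limit; d += 2)
        if (x % d == 0)
            return false;
    return true;
}

int main()
{
    int t;
    if (std::scanf("%d", &t) != 1) return 0;
    while (t--)
    {
        long long m, n;
        std::scanf("%lld %lld", &m, &n);
        for (long long i = m; i <= n; ++i)
            if (is_prime(i))
                std::printf("%lld\n", i);
    }
    return 0;
}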

Also, if I recall correctly, you don't need long long for this problem. int will be enough, since int is 32 bits and long long is 64 on the SPOJ system.

Try this and post back if your program still segfaults.

IVlad
+4  A: 
  long long int a[n+2];

According to SPOJ, n can be as large as 1,000,000,000. The syntax

 type var_name[non_const_expr];

will create a variable-length array (VLA) on the stack. This is a C99 feature, but I guess your compiler also provides it in C++ as an extension.

The problem is that with this declaration you expect to allocate 8 gigabytes on the stack (!): 1,000,000,000 entries of 8 bytes each. I don't think any OS supports gigabyte-sized stacks by default yet.

On-stack memory allocation is usually implemented simply by moving the stack pointer. Such a large offset can move the stack pointer out of range (i.e. a stack overflow), into memory the process is not allowed to access, which causes the segmentation fault.

You could try to allocate on the heap with std::vector:

std::vector<long long int> a (n+2);

but it still takes a lot of RAM. Try to rethink your algorithm.
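
For example, one possible direction (just a rough, untested sketch, and only one way to do it) is a segmented ("windowed") sieve: first sieve the primes up to sqrt(n), then use them to cross out composites inside [m, n] only, so memory stays around sqrt(n) + (n - m) flags instead of n. The primes_in_range helper is only for illustration.

#include <cstdio>
#include <cmath>
#include <vector>

// Segmented ("windowed") sieve: primes in [m, n] using only the primes
// up to sqrt(n), so memory is about sqrt(n) + (n - m) flags, not n.
static void primes_in_range(long long m, long long n)
{
    if (m < 2) m = 2;                              // 0 and 1 are not prime
    if (n < m) return;
    long long root = (long long)std::sqrt((double)n);

    // Ordinary sieve up to sqrt(n).
    std::vector<bool> small_prime(root + 1, true);
    for (long long i = 2; i * i <= root; ++i)
        if (small_prime[i])
            for (long long j = i * i; j <= root; j += i)
                small_prime[j] = false;

    // Cross out multiples of each small prime inside the window [m, n].
    std::vector<bool> in_window(n - m + 1, true);
    for (long long p = 2; p <= root; ++p)
    {
        if (!small_prime[p]) continue;
        long long start = ((m + p - 1) / p) * p;   // first multiple of p >= m
        if (start < p * p) start = p * p;          // never cross out p itself
        for (long long j = start; j <= n; j += p)
            in_window[j - m] = false;
    }

    for (long long i = m; i <= n; ++i)
        if (in_window[i - m])
            std::printf("%lld\n", i);
}

int main()
{
    int t;
    if (std::scanf("%d", &t) != 1) return 0;
    while (t--)
    {
        long long m, n;
        std::scanf("%lld %lld", &m, &n);
        primes_in_range(m, n);
    }
    return 0;
}

The small sieve up to sqrt(n) could also be built just once, for the largest possible n, instead of once per test case, in line with the first answer's advice about keeping allocations outside the while loop.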

KennyTM
Oh, wow. I forgot `n` can be THAT large. Yes, in that case a better algorithm is in order.
IVlad