My OS and compiler are:

OS: Windows XP SP2, Linux SUSE 9, or Cygwin
Compiler: Visual C++ 2003, GCC, or Cygwin
Both the PC and the OS are 32-bit

So, how can I create a super-huge file in seconds?

I was told to use the file-mapping functions, but I failed to create files over 2 GB that way. Any help would be appreciated, thanks.

+3  A: 

you can use dd

dd if=/dev/zero of=bigfile.txt bs=$((1024*1024)) count=100

or just plain shell scripting, if the contents don't matter:

1) create a dummy file with a few lines, e.g. 10 lines
2) use cat dummy >> bigfile in a while loop

eg

    while true
    do
      cat dummy >> bigfile.txt
      # break out once the file has more than 10000 lines, for example
      [ "$(wc -l < bigfile.txt)" -gt 10000 ] && break
    done

Then do it once more, concatenating the big file onto itself:

    while true
    do
      cat bigfile.txt >> bigfile2.txt
      # break out once bigfile2.txt reaches the size you want, e.g. 1 GB
      [ "$(wc -c < bigfile2.txt)" -ge $((1024*1024*1024)) ] && break
    done
    rm -f dummy bigfile.txt
`/dev/zero` is likely to be faster than `/dev/random`, and easier for whatever is consuming the file to deal with.
Stephen C
Agreed, /dev/random can block; if pseudo-random content is needed, it's better to use /dev/urandom.
Tim Post
+7  A: 

Using dd in Linux to create a 1 GB file takes 57 seconds of wall-clock time on a somewhat loaded box with a slow disk, and about 17 seconds of system time:

$ time dd if=/dev/zero of=bigfile bs=G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 53.9903 s, 19.9 MB/s

real    0m56.685s
user    0m0.008s
sys     0m17.113s
$
Dirk Eddelbuettel
+2  A: 

Does the file have to take up actual disk space? If not, you could always (in Cygwin, or Linux):

dd if=/dev/zero of=bigfile seek=7T bs=1 count=1

This will create a 7 TB file in a fraction of a second. Of course, it won't allocate much actual disk space: you'll have a big sparse file.

Under Cygwin or Linux you can do the same thing from a C program with a call to ftruncate.
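A minimal sketch of that (the file name and the 7 TB size are just examples; on a 32-bit system you need a 64-bit `off_t`, hence the `_FILE_OFFSET_BITS` define):

    /* sketch: create a huge sparse file with ftruncate (Linux/Cygwin);
       the name and size below are only examples */
    #define _FILE_OFFSET_BITS 64   /* 64-bit off_t on 32-bit systems */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bigfile", O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return 1;
        /* extend the file to 7 TB without writing any data blocks */
        if (ftruncate(fd, 7LL * 1024 * 1024 * 1024 * 1024) != 0)
            return 1;
        close(fd);
        return 0;
    }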

Managu
Couldn't find a great man page quickly. In Linux, calling ftruncate with a size larger than the file always extends the file's size. Pretty sure Cygwin adheres to this.
Managu
Even if it didn't, it's fairly easy to use a pair of `fseek(HUGE)` and `fwrite(a byte)` to achieve the same thing.
Adam Rosenfield
+2  A: 

Depending on your system limits, you can create a large file in a fraction of a second...

#include <stdio.h>

/* compile with -D_FILE_OFFSET_BITS=64 so the offset can go past 2 GB */
FILE *fp = fopen("largefile", "w");
for (int i = 0; i < 102400; i++)
{
    /* seek ~10 MB further each pass -- about 1 TB in total */
    fseek(fp, 10240000, SEEK_CUR);
}
fprintf(fp, "%c", 'x');   /* write one byte so the size is materialized */
fclose(fp);

Play with this.

Murali VP
Isn't it a little time-consuming?
Macroideal
+1  A: 

cat /dev/urandom >> /home/mybigfile

It will stop with an error once the disk runs out of space.

This is for Linux/BSD, and possibly Cygwin.

OGe
+1  A: 

In SUSE in a VM I ran `dd if=/dev/zero of=file; rm file`, which fills the disk with zeroes and then deletes the file once the disk is full. This let me compress the VM image a little better afterwards, presumably because zeroed free space compresses well; I read about the trick on a forum somewhere.

dlamblin
A: 

If you want a sparse file you can also do that on Windows (on NTFS volumes), using CreateFile and DeviceIoControl with FSCTL_SET_SPARSE and FSCTL_SET_ZERO_DATA; for more info see here.
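A rough sketch of that approach (the path and the 10 GB size are only examples, and error checking is kept to a minimum):

    /* sketch: create a ~10 GB sparse file on an NTFS volume */
    #include <windows.h>
    #include <winioctl.h>

    int main(void)
    {
        DWORD bytes;
        LARGE_INTEGER size;
        FILE_ZERO_DATA_INFORMATION zero;

        HANDLE h = CreateFileA("C:\\bigfile.bin", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* mark the file as sparse */
        DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &bytes, NULL);

        /* set the logical size to 10 GB without allocating the space */
        size.QuadPart = (LONGLONG)10 * 1024 * 1024 * 1024;
        SetFilePointerEx(h, size, NULL, FILE_BEGIN);
        SetEndOfFile(h);

        /* tell NTFS the whole range is zeroes, so it stays unallocated */
        zero.FileOffset.QuadPart = 0;
        zero.BeyondFinalZero.QuadPart = size.QuadPart;
        DeviceIoControl(h, FSCTL_SET_ZERO_DATA, &zero, sizeof(zero),
                        NULL, 0, &bytes, NULL);

        CloseHandle(h);
        return 0;
    }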

Matteo Italia
A: 

Hello,

You can use the "fsutil" command on Win2000/XP/7:

c:\> fsutil file createnew
Usage : fsutil file createnew <filename> <length>
Eg    : fsutil file createnew C:\testfile.txt 1000

The length is given in bytes, so pass a suitably large number for a huge file.

Regards

opal