views: 329
answers: 7

Let me preface this by saying that I'm pretty new to Java.

I have a file that contains a single line. The size of the file is about 200MB. I need to insert a newline character after every 309th character. I believe I have the code to do this properly, but I keep running into memory errors. I've tried increasing the heap space to no avail.

Is there a less memory-intensive way of handling this?

BufferedReader r = new BufferedReader(new FileReader(fileName));

String line;

while ((line=r.readLine()) != null) {
  System.out.println(line.replaceAll("(.{309})", "$1\n"));
}
+14  A: 

Your code has two problems:

  1. You're loading the entire file into memory at once: the file is a single line, so readLine() will need at least 200MB of heap space just for that String; and

  2. Using a regex like that is a horribly inefficient way of adding newlines. The straightforward code solution will be an order of magnitude faster.

Both of these problems are easily fixed.

Use a FileReader and a FileWriter to read 309 characters at a time, append a newline, and write them out.

Update: added a test of both character-by-character and buffered reading. The buffered reading actually adds a lot of complexity, because you need to cater for the possible (but typically exceedingly rare) situation where read() returns fewer characters than you asked for while there are still characters left to read.

Firstly the simple version:

private static void charRead(boolean verifyHash) {
  Reader in = null;
  Writer out = null;
  long start = System.nanoTime();
  long wrote = 0;
  MessageDigest md = null;
  try {
    if (verifyHash) {
      md = MessageDigest.getInstance("SHA1");
    }
    in = new BufferedReader(new FileReader(IN_FILE));
    out = new BufferedWriter(new FileWriter(CHAR_FILE));
    int count = 0; // characters written since the last newline
    for (int c = in.read(); c != -1; c = in.read()) {
      if (verifyHash) {
        md.update((byte) c); // the (byte) cast is safe here: the test data is ASCII-only
      }
      out.write(c);
      wrote++;
      if (++count >= COUNT) {
        if (verifyHash) {
          md.update((byte) '\n');
        }
        out.write("\n");
        wrote++;
        count = 0;
      }
    }
  } catch (IOException e) {
    throw new RuntimeException(e);
  } catch (NoSuchAlgorithmException e) {
    throw new RuntimeException(e);
  } finally {
    safeClose(in);
    safeClose(out);
    long end = System.nanoTime();
    System.out.printf("Created %s size %,d in %,.3f seconds. Hash: %s%n",
        CHAR_FILE, wrote, (end - start) / 1000000000.0d, hash(md, verifyHash));
  }
}

And the "block" version:

private static void blockRead(boolean verifyHash) {
  Reader in = null;
  Writer out = null;
  long start = System.nanoTime();
  long wrote = 0;
  MessageDigest md = null;
  try {
    if (verifyHash) {
      md = MessageDigest.getInstance("SHA1");
    }
    in = new BufferedReader(new FileReader(IN_FILE));
    out = new BufferedWriter(new FileWriter(BLOCK_FILE));
    char[] buf = new char[COUNT + 1]; // leave a space for the newline
    int lastRead = in.read(buf, 0, COUNT); // read in 309 chars at a time
    while (lastRead != -1) { // end of file
      // technically fewer than 309 characters may have been read;
      // this is very unusual but possible, so we need to keep
      // reading until we get all the characters we want
      int totalRead = lastRead;
      while (totalRead < COUNT) {
        lastRead = in.read(buf, totalRead, COUNT - totalRead);
        if (lastRead == -1) {
          break;
        }
        totalRead += lastRead; // advance by however many chars were actually read
      }

      // if we get -1, it'll eventually signal an exit, but first
      // we must write any characters we have read.
      // note: as written, a trailing block of fewer than 309
      // characters will not have a newline appended
      if (totalRead == COUNT) {
        buf[totalRead++] = '\n';
      }
      if (totalRead > 0) {
        out.write(buf, 0, totalRead);
        if (verifyHash) {
          md.update(new String(buf, 0, totalRead).getBytes("UTF-8"));
        }
        wrote += totalRead;
      }

      // don't try and read again if we've already hit EOF
      if (lastRead != -1) {
        lastRead = in.read(buf, 0, COUNT);
      }
    }
  } catch (IOException e) {
    throw new RuntimeException(e);
  } catch (NoSuchAlgorithmException e) {
    throw new RuntimeException(e);
  } finally {
    safeClose(in);
    safeClose(out);
    long end = System.nanoTime();
    System.out.printf("Created %s size %,d in %,.3f seconds. Hash: %s%n",
        BLOCK_FILE, wrote, (end - start) / 1000000000.0d, hash(md, verifyHash));
  }
}

And a method to create a test file:

private static void createFile() {
  Writer out = null;
  long start = System.nanoTime();
  try {
    out = new BufferedWriter(new FileWriter(IN_FILE));
    Random r = new Random();
    for (int i = 0; i < SIZE; i++) {
      out.write(CHARS[r.nextInt(CHARS.length)]);
    }
  } catch (IOException e) {
    throw new RuntimeException(e);
  } finally {
    safeClose(out);
    long end = System.nanoTime();
    System.out.printf("Created %s size %,d in %,.3f seconds%n",
      IN_FILE, SIZE, (end - start) / 1000000000.0d);
  }
}

These all assume:

private static final int SIZE = 200000000;
private static final int COUNT = 309;
private static final char[] CHARS;
private static final char[] BYTES = new char[]{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
private static final String IN_FILE = "E:\\temp\\in.dat";
private static final String CHAR_FILE = "E:\\temp\\char.dat";
private static final String BLOCK_FILE = "E:\\temp\\block.dat";

static {
  char[] chars = new char[1000];
  int nchars = 0;
  for (char c = 'a'; c <= 'z'; c++) {
    chars[nchars++] = c;
    chars[nchars++] = Character.toUpperCase(c);
  }
  for (char c = '0'; c <= '9'; c++) {
    chars[nchars++] = c;
  }
  chars[nchars++] = ' ';
  CHARS = new char[nchars];
  System.arraycopy(chars, 0, CHARS, 0, nchars);
}
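
The safeClose and hash helpers are referenced above but not shown. A minimal sketch of what they might look like (my assumption, not the original code: hash presumably formats the digest via the BYTES hex table; the usual java.io and java.security imports apply):

private static void safeClose(Closeable c) {
  // close quietly: there's nothing useful to do if close fails
  if (c != null) {
    try {
      c.close();
    } catch (IOException e) {
      // ignored
    }
  }
}

private static String hash(MessageDigest md, boolean verifyHash) {
  // format the digest as 0x-prefixed hex, matching the output below
  if (!verifyHash || md == null) {
    return "(not calculated)";
  }
  StringBuilder sb = new StringBuilder("0x");
  for (byte b : md.digest()) {
    sb.append(BYTES[(b >> 4) & 0xF]).append(BYTES[b & 0xF]);
  }
  return sb.toString();
}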

Running this test:

public static void main(String[] args) {
  if (!new File(IN_FILE).exists()) {
    createFile();
  }
  charRead(true);
  charRead(true);
  charRead(false);
  charRead(false);
  blockRead(true);
  blockRead(true);
  blockRead(false);
  blockRead(false);
}

Gives this result (Intel Q9450, Windows 7 64bit, 8GB RAM, test run on 7200rpm 1.5TB drive):

Created E:\temp\char.dat size 200,647,249 in 29.690 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 18.177 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 7.911 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 7.867 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 8.018 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 7.949 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 3.958 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 3.909 seconds. Hash: (not calculated)

Conclusion: the SHA1 hash verification is really expensive, which is why I ran versions with and without it. Basically, after warm-up the "efficient" version is only about 2x as fast. I suspect that by this point the file is effectively in the OS file cache.

If I reverse the order of the block and char reads, the result is:

Created E:\temp\block.dat size 200,647,249 in 8.071 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 8.087 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 4.128 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 3.918 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 18.020 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 17.953 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 7.879 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 8.016 seconds. Hash: (not calculated)

It's interesting that the character-by-character version takes a far bigger initial hit on the first read of the file.

So, as per usual, it's a choice between efficiency and simplicity.

cletus
Awesome, this worked great! Thanks a lot!
Jesse
+2  A: 

Open it and read a character at a time, and write that character to where it needs to go. Keep a counter and every time the counter is large enough, write out a newline and set the counter to zero.
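
A minimal sketch of that approach, not part of the original answer (IN_FILE and OUT_FILE are placeholder names; the streams are buffered, as suggested in the comments below):

Reader in = new BufferedReader(new FileReader(IN_FILE));
Writer out = new BufferedWriter(new FileWriter(OUT_FILE));
try {
  // copy one character at a time, inserting a newline
  // after every 309th character
  int counter = 0;
  for (int c = in.read(); c != -1; c = in.read()) {
    out.write(c);
    if (++counter == 309) {
      out.write('\n');
      counter = 0;
    }
  }
} finally {
  in.close();
  out.close();
}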

Thorbjørn Ravn Andersen
-1. Reading one char at a time will be painfully slow.
JSBangs
Then wrap it in a BufferedReader. I was keeping it simple.
Thorbjørn Ravn Andersen
+1  A: 

Not sure how much better this solution is, but you could always read the file character by character.

  1. Read in 309 characters and write them to the output file. Not sure if you can do this all at once or if you have to write a single character at a time.

  2. After writing the 309th character, output a newline into the file.

  3. Repeat.

For example:

FileInputStream fis = new FileInputStream(file);
char current;
int counter = 0;
while (fis.available() > 0) {
   current = (char) fis.read();
   counter++;
   // output current to file
   if ((counter % 309) == 0) {
      // output newline character
   }
}
Kyra
A: 

Wrap your FileReader in a BufferedReader, and then keep looping, reading 309 characters at a time.

Something like (not tested):

BufferedReader r = new BufferedReader(new FileReader("yourfile.txt"), 1024);
boolean done = false;
char[] buffer = new char[309];
while(!done)
{
   int read = r.read(buffer,0,309);
   if(read > 0)
   {
     //write the first 'read' chars of buffer to destination, appending a newline
   }
   else
   {
        done = true;
   }
}
Visage
+1  A: 

Don't use a BufferedReader, which will keep much of the underlying file in memory. Use the FileReader directly, and then use the read() method to get exactly as much data as you need:

FileReader reader = new FileReader(fileName);
char[] buffer = new char[309];
int charsRead = 0;

while ((charsRead = reader.read(buffer, 0, buffer.length)) == buffer.length)
{
    System.out.println(new String(buffer));
}
if (charsRead > 0)
{
     // print any trailing chars
     System.out.println(new String(buffer, 0, charsRead));
}
JSBangs
You can set the size of the BufferedReader to avoid reading the whole thing at once.
Visage
-1: you are not _guaranteed_ that reader.read() fills the buffer.
Thorbjørn Ravn Andersen
`BufferedReader` doesn't keep the entire file in memory. The problem is that if the file is one line then `readLine()` will, by definition, read in the entire file.
cletus
+2  A: 

Read into a byte array with length 309 and then write the bytes read:

import java.io.*;

public class Test {
   public static void main(String[] args) throws Exception {
      InputStream in = null;
      byte[] chars = new byte[309];
      try {
         in = new FileInputStream(args[0]);
         int read = 0;

         while ((read = in.read(chars)) != -1) {
            System.out.write(chars, 0, read);
            System.out.println();
         }
      } finally {
         if (in != null) {
            in.close();
         }
      }
   }
}
naikus
Bytes might break data in multibyte encodings, such as utf-8 or utf-16. That is unspecified in the original question, but still. If the 309th byte is the first byte of a multibyte character, buh-bye.
DaveE
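
For illustration (not part of the original comment): decoding through a Reader with an explicit charset sidesteps this, since read() hands back whole chars rather than raw bytes. A minimal sketch, assuming UTF-8 data and placeholder file names, and ignoring the short-read issue discussed in the accepted answer:

Reader in = new BufferedReader(new InputStreamReader(new FileInputStream("in.dat"), "UTF-8"));
Writer out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("out.dat"), "UTF-8"));
try {
  // the decoder guarantees a block boundary never splits a multibyte sequence
  char[] buf = new char[309];
  int n;
  while ((n = in.read(buf, 0, buf.length)) != -1) {
    out.write(buf, 0, n);
    out.write('\n');
  }
} finally {
  in.close();
  out.close();
}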
A: 

You could change your program to this:

 BufferedReader r = new BufferedReader(new FileReader(fileName));
 char[] data = new char[309];
 int read;

 while ((read = r.read(data, 0, 309)) > 0) {
     System.out.println(new String(data, 0, read));
 }

This is from my head and not tested.

Jaka