Your code has two problems:
You're loading the entire file into memory at once; assuming it really is a single line, you'll need at least 200MB of heap space just for that; and
Using a regex like that to add newlines is a horribly inefficient way to do it. Straightforward character-handling code will be an order of magnitude faster.
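(For reference, the kind of approach being criticized is presumably something along these lines. This is a hypothetical reconstruction, not the actual code from the question, and it uses NIO file helpers purely for brevity:)
import java.nio.file.Files;
import java.nio.file.Paths;

public class RegexWrap {
    public static void main(String[] args) throws Exception {
        // Slurp the whole single-line file into one String: that alone needs at
        // least as much heap as the file is large (more, since chars are 16-bit),
        // then regex-replace every 309-character run to add newlines, which
        // builds yet another full-size copy.
        String content = new String(Files.readAllBytes(Paths.get("E:\\temp\\in.dat")), "US-ASCII");
        String wrapped = content.replaceAll("(.{309})", "$1\n");
        Files.write(Paths.get("E:\\temp\\regex.dat"), wrapped.getBytes("US-ASCII"));
    }
}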
Both of these problems are easily fixed.
Use a FileReader and a FileWriter to load 309 characters at a time, append a newline and write them out.
Update: I've added a test of both character-by-character and buffered reading. The buffered reading actually adds a fair amount of complexity, because you need to cater for the possible (but typically exceedingly rare) situation where read() returns fewer characters than you asked for while there are still characters left to read.
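In isolation that handling boils down to a loop like this hypothetical readFully() helper (not part of the code below; the block version handles it inline):
// Keep reading until the buffer holds 'len' characters or EOF is reached,
// because a single read() call may legitimately return fewer characters
// than requested.
private static int readFully(Reader in, char[] buf, int len) throws IOException {
    int total = 0;
    while (total < len) {
        int n = in.read(buf, total, len - total);
        if (n == -1) {
            break; // EOF before the buffer was filled
        }
        total += n;
    }
    return total; // number of characters actually read (0 on immediate EOF)
}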
Firstly the simple version:
private static void charRead(boolean verifyHash) {
    Reader in = null;
    Writer out = null;
    long start = System.nanoTime();
    long wrote = 0;
    MessageDigest md = null;
    try {
        if (verifyHash) {
            md = MessageDigest.getInstance("SHA1");
        }
        in = new BufferedReader(new FileReader(IN_FILE));
        out = new BufferedWriter(new FileWriter(CHAR_FILE));
        int count = 0;
        for (int c = in.read(); c != -1; c = in.read()) {
            if (verifyHash) {
                md.update((byte) c);
            }
            out.write(c);
            wrote++;
            if (++count >= COUNT) {
                if (verifyHash) {
                    md.update((byte) '\n');
                }
                out.write("\n");
                wrote++;
                count = 0;
            }
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (NoSuchAlgorithmException e) {
        throw new RuntimeException(e);
    } finally {
        safeClose(in);
        safeClose(out);
        long end = System.nanoTime();
        System.out.printf("Created %s size %,d in %,.3f seconds. Hash: %s%n",
                CHAR_FILE, wrote, (end - start) / 1000000000.0d, hash(md, verifyHash));
    }
}
And the "block" version:
private static void blockRead(boolean verifyHash) {
    Reader in = null;
    Writer out = null;
    long start = System.nanoTime();
    long wrote = 0;
    MessageDigest md = null;
    try {
        if (verifyHash) {
            md = MessageDigest.getInstance("SHA1");
        }
        in = new BufferedReader(new FileReader(IN_FILE));
        out = new BufferedWriter(new FileWriter(BLOCK_FILE));
        char[] buf = new char[COUNT + 1]; // leave a space for the newline
        int lastRead = in.read(buf, 0, COUNT); // read in 309 chars at a time
        while (lastRead != -1) { // -1 signals end of file
            // technically fewer than 309 characters may have been read;
            // this is very unusual but possible, so we need to keep
            // reading until we have all the characters we want
            int totalRead = lastRead;
            while (totalRead < COUNT) {
                lastRead = in.read(buf, totalRead, COUNT - totalRead);
                if (lastRead == -1) {
                    break;
                } else {
                    totalRead += lastRead;
                }
            }
            // if we get -1 it'll eventually signal an exit, but first
            // we must write any characters we have read.
            // note: a trailing chunk of fewer than 309 characters does not
            // get a newline appended here; that may or may not be what you want
            if (totalRead == COUNT) {
                buf[totalRead++] = '\n';
            }
            if (totalRead > 0) {
                out.write(buf, 0, totalRead);
                if (verifyHash) {
                    md.update(new String(buf, 0, totalRead).getBytes("UTF-8"));
                }
                wrote += totalRead;
            }
            // don't try and read again if we've already hit EOF
            if (lastRead != -1) {
                lastRead = in.read(buf, 0, COUNT);
            }
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (NoSuchAlgorithmException e) {
        throw new RuntimeException(e);
    } finally {
        safeClose(in);
        safeClose(out);
        long end = System.nanoTime();
        System.out.printf("Created %s size %,d in %,.3f seconds. Hash: %s%n",
                BLOCK_FILE, wrote, (end - start) / 1000000000.0d, hash(md, verifyHash));
    }
}
And a method to create a test file:
private static void createFile() {
    Writer out = null;
    long start = System.nanoTime();
    try {
        out = new BufferedWriter(new FileWriter(IN_FILE));
        Random r = new Random();
        for (int i = 0; i < SIZE; i++) {
            out.write(CHARS[r.nextInt(CHARS.length)]);
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        safeClose(out);
        long end = System.nanoTime();
        System.out.printf("Created %s size %,d in %,.3f seconds%n",
                IN_FILE, SIZE, (end - start) / 1000000000.0d);
    }
}
These all assume:
private static final int SIZE = 200000000;
private static final int COUNT = 309;
private static final char[] CHARS;
private static final char[] BYTES = new char[]{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
private static final String IN_FILE = "E:\\temp\\in.dat";
private static final String CHAR_FILE = "E:\\temp\\char.dat";
private static final String BLOCK_FILE = "E:\\temp\\block.dat";
static {
    char[] chars = new char[1000];
    int nchars = 0;
    for (char c = 'a'; c <= 'z'; c++) {
        chars[nchars++] = c;
        chars[nchars++] = Character.toUpperCase(c);
    }
    for (char c = '0'; c <= '9'; c++) {
        chars[nchars++] = c;
    }
    chars[nchars++] = ' ';
    CHARS = new char[nchars];
    System.arraycopy(chars, 0, CHARS, 0, nchars);
}
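They also call safeClose() and hash() helpers that aren't shown; something along these lines would do (a sketch rather than the exact originals, on the assumption that hash() uses the BYTES table above to hex-encode the digest):
private static void safeClose(Closeable c) {
    // close quietly; failures on close don't matter for this test
    if (c != null) {
        try {
            c.close();
        } catch (IOException e) {
            // ignore
        }
    }
}

private static String hash(MessageDigest md, boolean verifyHash) {
    if (!verifyHash || md == null) {
        return "(not calculated)";
    }
    // hex-encode the digest using the BYTES lookup table
    StringBuilder sb = new StringBuilder("0x");
    for (byte b : md.digest()) {
        sb.append(BYTES[(b >> 4) & 0xf]).append(BYTES[b & 0xf]);
    }
    return sb.toString();
}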
Running this test:
public static void main(String[] args) {
    if (!new File(IN_FILE).exists()) {
        createFile();
    }
    charRead(true);
    charRead(true);
    charRead(false);
    charRead(false);
    blockRead(true);
    blockRead(true);
    blockRead(false);
    blockRead(false);
}
Gives this result (Intel Q9450, Windows 7 64bit, 8GB RAM, test run on 7200rpm 1.5TB drive):
Created E:\temp\char.dat size 200,647,249 in 29.690 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 18.177 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 7.911 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 7.867 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 8.018 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 7.949 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 3.958 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 3.909 seconds. Hash: (not calculated)
Conclusion: the SHA1 hash verification is really expensive, which is why I ran each version with and without it. Basically, after warm-up the "efficient" block version is only about 2x as fast. I guess by that point the file is effectively sitting in the OS cache.
If I reverse the order of the block and char reads, the result is:
Created E:\temp\block.dat size 200,647,249 in 8.071 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 8.087 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\block.dat size 200,647,249 in 4.128 seconds. Hash: (not calculated)
Created E:\temp\block.dat size 200,647,249 in 3.918 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 18.020 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 17.953 seconds. Hash: 0x22ce9e17e17a67e5ea6f8fe929d2ce4780e8ffa4
Created E:\temp\char.dat size 200,647,249 in 7.879 seconds. Hash: (not calculated)
Created E:\temp\char.dat size 200,647,249 in 8.016 seconds. Hash: (not calculated)
It's interesting that the character-by-character version takes a far bigger initial hit on the first read of the file.
So, as per usual, it's a choice between efficiency and simplicity.