I have a program that processes very large files. Now I need to show a progress bar for the processing. The program works at the word level: it reads one line at a time, splits it into words and processes the words one by one. So while the program runs, it knows the count of words processed. If it somehow knew the word count of the file beforehand, it could easily calculate the progress.

The problem is that the files I am dealing with may be very large, so it's not a good idea to process the file twice: once to get the total word count and again to run the actual processing code.

So I am trying to write code that can estimate the word count of a file by reading a small portion of it. This is what I have come up with (in Clojure; tokenize-line is my own function that splits a line into words):

(defn estimated-word-count [file]
  (let [^java.io.File file (as-file file)
        ^java.io.Reader rdr (reader file)
        buffer (char-array 1000)
        chars-read (.read rdr buffer 0 1000)]
    (.close rdr)
    (if (= chars-read -1)
      0
      (* 0.001 (.length file) 
        (-> (String. buffer 0 chars-read) tokenize-line count)))))

This code reads the first 1000 characters of the file, creates a String from them, tokenizes it into words, counts the words, and then estimates the word count of the whole file by multiplying that count by the file length and dividing by 1000.

When I run this code on a file with English text, I get an almost correct word count. But when I run it on a file with Hindi text (encoded in UTF-8), it returns almost double the real word count.

I understand that this issue is because of the encoding (the file length is in bytes, while my sample is measured in characters). So is there any way to solve it?

SOLUTION

As suggested by Frank, I determine the byte count of the first 10000 characters and use it to estimate the word count of the file.

;; ratio of characters to bytes in the sample, used to convert the
;; file length in bytes into an estimated character count
(defn chars-per-byte [^String s]
  (/ (count s) ^Integer (count (.getBytes s "UTF-8"))))

(defn estimate-file-word-count [file]
  (let [file (as-file file)
        rdr (reader file)
        buffer (char-array 10000)
        chars-read (.read rdr buffer 0 10000)]
    (.close rdr)
    (if (= chars-read -1)
      0
      (let [s (String. buffer 0 chars-read)]
        ;; words-in-sample / chars-in-sample, scaled by the estimated character
        ;; count of the whole file (bytes on disk * chars per byte)
        (* (/ 1.0 chars-read) (.length file) (chars-per-byte s)
          (-> s tokenize-line count))))))

Note that this assumes UTF-8 encoding. Also, I decided to read the first 10000 chars because that gives a better estimate.
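
A minimal sketch of how the estimate can then drive the progress calculation (the words-processed counter is hypothetical, standing in for whatever running count the processing loop keeps):

(defn progress-fraction [words-processed estimated-total]
  ;; Fraction complete, clamped to 1.0 since the total is only an estimate.
  (if (pos? estimated-total)
    (min 1.0 (/ words-processed (double estimated-total)))
    1.0))

;; e.g. (progress-fraction 42000 (estimate-file-word-count "big-file.txt"))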

A: 

Can't you compensate for the average number of bytes/char with the ratio of chars-read/bytes-read?

Peter Tillemans
+11  A: 

Why not just base the progress bar on the bytes processed instead of a word count? You know the size upfront, and then the major difficulty is just getting the bytes per word or bytes per line as you process them.

The easiest way to do this is, for each line you read in, to call getBytes with the character encoding the file was written in and take the length of the result. This may not be the most efficient way of doing it, but it will be very accurate and simple to do.
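
A rough Clojure sketch of that idea (assuming UTF-8 and a hypothetical process-line function supplied by the caller; the +1 per line only approximates the stripped newline):

(require '[clojure.java.io :as io])

(defn process-with-byte-progress [file process-line report-progress]
  ;; Track progress as bytes seen so far / total file size in bytes.
  (let [total-bytes (.length (io/file file))]
    (with-open [rdr (io/reader file :encoding "UTF-8")]
      (reduce (fn [bytes-done line]
                (process-line line)
                ;; +1 roughly accounts for the newline that line-seq strips.
                (let [done (+ bytes-done (count (.getBytes line "UTF-8")) 1)]
                  (report-progress (/ done (double total-bytes)))
                  done))
              0
              (line-seq rdr)))))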

Alternatively, you could read in a fixed number of bytes at a time, and then maintain a buffer yourself to handle partial words and line breaks.

Russell Leggett
+2  A: 

In UTF-8, Hindi text averages about two bytes per char. You seem to read 1000 chars and apply the calculation to the file length in bytes. So, if you happen to know the language beforehand, you could compensate for the char-to-byte ratio.

Otherwise, you could determine the byte count of the first 100 chars to estimate the ratio. I do not know Clojure very well, but maybe you can determine the current position in the file as a byte count with some variant of a seek function after having read 1000 chars?

Frank
A: 

How accurate does your progress bar need to be? I'm guessing the answer isn't "mission critical to 0.1% accuracy". In that case, just check the size of the file and its encoding, and use a hard-coded AVG_BYTES_PER_WORD with your progress bar.
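
A rough sketch of that idea (the value 6 is purely illustrative; tune it for your language and encoding):

(require '[clojure.java.io :as io])

(def avg-bytes-per-word 6) ; illustrative guess, not a measured constant

(defn rough-word-count [file]
  ;; File size in bytes divided by an assumed average word size in bytes.
  (quot (.length (io/file file)) avg-bytes-per-word))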

bluedevil2k